url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | state_reason | draft | pull_request |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/27144
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27144/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27144/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27144/events
|
https://github.com/huggingface/transformers/pull/27144
| 1,968,039,955 |
PR_kwDOCUB6oc5eGlow
| 27,144 |
[`Styling`] stylify using ruff
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,698 | 1,700 | 1,700 |
COLLABORATOR
| null |
# What does this PR do?
Removes our dependency on `black` in favor of `ruff` (implemented in Rust):
- faster `make style` (I'll try to get numbers, but that one is pretty obvious)
- no need to use doc-builder's black styling, as the docs are now styled with ruff
- faster overall CI for the repo-quality checks
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27144/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/huggingface/transformers/issues/27144/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27144",
"html_url": "https://github.com/huggingface/transformers/pull/27144",
"diff_url": "https://github.com/huggingface/transformers/pull/27144.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27144.patch",
"merged_at": 1700153000000
}
|
https://api.github.com/repos/huggingface/transformers/issues/27143
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27143/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27143/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27143/events
|
https://github.com/huggingface/transformers/issues/27143
| 1,968,006,488 |
I_kwDOCUB6oc51TWVY
| 27,143 |
RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
|
{
"login": "888yyh",
"id": 58061084,
"node_id": "MDQ6VXNlcjU4MDYxMDg0",
"avatar_url": "https://avatars.githubusercontent.com/u/58061084?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/888yyh",
"html_url": "https://github.com/888yyh",
"followers_url": "https://api.github.com/users/888yyh/followers",
"following_url": "https://api.github.com/users/888yyh/following{/other_user}",
"gists_url": "https://api.github.com/users/888yyh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/888yyh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/888yyh/subscriptions",
"organizations_url": "https://api.github.com/users/888yyh/orgs",
"repos_url": "https://api.github.com/users/888yyh/repos",
"events_url": "https://api.github.com/users/888yyh/events{/privacy}",
"received_events_url": "https://api.github.com/users/888yyh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @888yyh, thanks for raising this issue! \r\n\r\nSo that we can help you, please make sure to follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) and [issues guidelines](https://github.com/huggingface/transformers/blob/main/ISSUES.md) as well as filling out all of the information in the [issue template](https://github.com/huggingface/transformers/blob/main/.github/ISSUE_TEMPLATE/bug-report.yml).\r\n\r\nIn this instance, could you: \r\n* Provide a code snippet we can run to recreate the error. We don't have access to `\"./model\"`\r\n* Provide information on the running environment: run `transformers-cli env` in the terminal and copy-paste the output\r\n* Clarify what hardware this is running on (CPU, GPU etc.)\r\n* Format the code example to use markdown multiline code formatting. Any code and traceback errors should be between three backticks like so ` ``` CODE HERE ``` `",
"> Hi @888yyh, thanks for raising this issue!\r\n> \r\n> So that we can help you, please make sure to follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) and [issues guidelines](https://github.com/huggingface/transformers/blob/main/ISSUES.md) as well as filling out all of the information in the [issue template](https://github.com/huggingface/transformers/blob/main/.github/ISSUE_TEMPLATE/bug-report.yml).\r\n> \r\n> In this instance, could you:\r\n> \r\n> * Provide a code snippet we can run to recreate the error. We don't have access to `\"./model\"`\r\n> * Provide information on the running environment: run `transformers-cli env` in the terminal and copy-paste the output\r\n> * Clarify what hardware this is running on (CPU, GPU etc.)\r\n> * Format the code example to use markdown multiline code formatting. Any code and traceback errors should be between three backticks like so `` ``` CODE HERE ``` ``\r\n\"./model\" is the path saving model weights\r\ncode:\r\nimport torch\r\nfrom transformers import AutoTokenizer, AutoModel\r\nfrom modeling_chatglm import ChatGLMForConditionalGeneration\r\n# tokenizer = AutoTokenizer.from_pretrained(\r\n# \".\\\\model\", cache_dir ='./model1' ,trust_remote_code=True)\r\ntokenizer = AutoTokenizer.from_pretrained(\r\n \"./model\", trust_remote_code=True)\r\n# model = ChatGLMForConditionalGeneration.from_pretrained(\r\n # \"./model\").half().cuda()\r\nmodel = ChatGLMForConditionalGeneration.from_pretrained(\r\n \"./model\", torch_dtype=torch.float32)\r\n# model = ChatGLMForConditionalGeneration.from_pretrained(\r\n# \"./model\", cache_dir = './model1').float()\r\nwhile True:\r\n a = input(\"请输入您的问题:(输入q以退出)\")\r\n if a.strip() == 'q':\r\n \r\n exit()\r\n response, history = model.chat(tokenizer, \"问题:\" + a.strip() + '\\n答案:', max_length=256, history=[])\r\n print(\"回答:\", response)\r\n",
"transformers==4.16.1 torch 2.0.0+cpu' windows",
"@888yyh Please follow the request above and format the code example properly using markdown code formatting like so: ` ``` CODE GOES HERE ``` `.\r\n\r\nWe don't have access to `model` and so cannot run to code to reproduce the error. You'll need to provide an example with a publicly available checkpoint in which the same error occurs. \r\n\r\nPlease note that 4.16.1 is quite an old version of transformers, we are currently on 4.34.1. Please try upgrading your version of transformers. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,698 | 1,702 | 1,702 |
NONE
| null |
### System Info
Windows, CPU. I can only use the CPU, and ran the code below:
```python
import torch
from transformers import AutoTokenizer, AutoModel
from modeling_chatglm import ChatGLMForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained(
    "./model", trust_remote_code=True)
model = ChatGLMForConditionalGeneration.from_pretrained(
    "./model").half()
while True:
    a = input("Please enter your question (enter q to quit): ")
    if a.strip() == 'q':
        exit()
    response, history = model.chat(tokenizer, "Question: " + a.strip() + '\nAnswer: ', max_length=256, history=[])
    print("Answer:", response)
```
Error: `RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'`. However, if I use `model = ChatGLMForConditionalGeneration.from_pretrained("./model", torch_type=torch.float32)`, the error is `RuntimeError: mixed dtype (CPU): expect parameter to have scalar type of bFloat`. If I use `ChatGLMForConditionalGeneration.from_pretrained("./model", torch_type=torch.Bfloat)`, the error is `RuntimeError: mixed dtype (CPU): expect parameter to have scalar type of Float`.
How can I run the code correctly?
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Windows, CPU. I can only use the CPU, and ran the code below:
```python
import torch
from transformers import AutoTokenizer, AutoModel
from modeling_chatglm import ChatGLMForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained(
    "./model", trust_remote_code=True)
model = ChatGLMForConditionalGeneration.from_pretrained(
    "./model").half()
while True:
    a = input("Please enter your question (enter q to quit): ")
    if a.strip() == 'q':
        exit()
    response, history = model.chat(tokenizer, "Question: " + a.strip() + '\nAnswer: ', max_length=256, history=[])
    print("Answer:", response)
```
Error: `RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'`. However, if I use `model = ChatGLMForConditionalGeneration.from_pretrained("./model", torch_type=torch.float32)`, the error is `RuntimeError: mixed dtype (CPU): expect parameter to have scalar type of bFloat`. If I use `ChatGLMForConditionalGeneration.from_pretrained("./model", torch_type=torch.Bfloat)`, the error is `RuntimeError: mixed dtype (CPU): expect parameter to have scalar type of Float`.
How can I run the code correctly?
### Expected behavior
Windows, CPU. I can only use the CPU, and ran the code below:
```python
import torch
from transformers import AutoTokenizer, AutoModel
from modeling_chatglm import ChatGLMForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained(
    "./model", trust_remote_code=True)
model = ChatGLMForConditionalGeneration.from_pretrained(
    "./model").half()
while True:
    a = input("Please enter your question (enter q to quit): ")
    if a.strip() == 'q':
        exit()
    response, history = model.chat(tokenizer, "Question: " + a.strip() + '\nAnswer: ', max_length=256, history=[])
    print("Answer:", response)
```
Error: `RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'`. However, if I use `model = ChatGLMForConditionalGeneration.from_pretrained("./model", torch_type=torch.float32)`, the error is `RuntimeError: mixed dtype (CPU): expect parameter to have scalar type of bFloat`. If I use `ChatGLMForConditionalGeneration.from_pretrained("./model", torch_type=torch.Bfloat)`, the error is `RuntimeError: mixed dtype (CPU): expect parameter to have scalar type of Float`.
How can I run the code correctly?
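For reference, a minimal workaround sketch (not taken from this thread; it assumes a ChatGLM-style checkpoint at `./model` loaded with `trust_remote_code=True`): on CPU, skip `.half()` entirely and keep every parameter in float32, which avoids both the missing fp16 `LayerNorm` kernel and the mixed-dtype error.
```python
# Hypothetical workaround sketch: full float32 on CPU instead of .half().
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("./model", trust_remote_code=True)
model = AutoModel.from_pretrained(
    "./model",
    trust_remote_code=True,
    torch_dtype=torch.float32,  # load weights as fp32 rather than calling .half()
).float()                       # cast any remaining fp16/bf16 parameters to fp32
model.eval()

response, history = model.chat(tokenizer, "Question: hello\nAnswer: ", max_length=256, history=[])
print(response)
```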
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27143/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27143/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/27142
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27142/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27142/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27142/events
|
https://github.com/huggingface/transformers/issues/27142
| 1,967,964,217 |
I_kwDOCUB6oc51TMA5
| 27,142 |
Inaccurate code example within inline code-documentation
|
{
"login": "thariq-nugrohotomo",
"id": 87131717,
"node_id": "MDQ6VXNlcjg3MTMxNzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/87131717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thariq-nugrohotomo",
"html_url": "https://github.com/thariq-nugrohotomo",
"followers_url": "https://api.github.com/users/thariq-nugrohotomo/followers",
"following_url": "https://api.github.com/users/thariq-nugrohotomo/following{/other_user}",
"gists_url": "https://api.github.com/users/thariq-nugrohotomo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thariq-nugrohotomo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thariq-nugrohotomo/subscriptions",
"organizations_url": "https://api.github.com/users/thariq-nugrohotomo/orgs",
"repos_url": "https://api.github.com/users/thariq-nugrohotomo/repos",
"events_url": "https://api.github.com/users/thariq-nugrohotomo/events{/privacy}",
"received_events_url": "https://api.github.com/users/thariq-nugrohotomo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @thariq-nugrohotomo, thanks for raising this issue! \r\n\r\nIndeed, it seems that the behaviour in the original implementation is different. Would you like to open a PR updating the code examples in modeling_vision_encoder_decoder.py and modeling_trocr.py? This way you get the github contribution. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,698 | 1,702 | 1,702 |
NONE
| null |
I believe this code example contains inaccurate information:
https://github.com/huggingface/transformers/blob/211ad4c9cc1c0882c4a22eaca7b4d7d1e2f264b3/src/transformers/models/vision_encoder_decoder/modeling_vision_encoder_decoder.py#L550C15-L550C15
```python
model.config.decoder_start_token_id = processor.tokenizer.cls_token_id
```
TrOCR's `decoder_start_token_id` should be `eos` instead of `cls` or `bos`.
Using the pretrained model, when I pass `cls` or `bos` as the initial decoder token, the output (first decoded token) is rarely correct. But once I use `eos`, the output is correct, or at least similar to the output returned by `model.generate()`.
In the official code from Microsoft, they fall back to `eos` if the token is not specified: https://github.com/microsoft/unilm/blob/6f60612e7cc86a2a1ae85c47231507a587ab4e01/trocr/generator.py#L84
Code excerpt to manually see the first decoded token:
```python
decoder_start_token_id = processor.tokenizer.eos_token_id
x = model(pixel_values, torch.tensor([[decoder_start_token_id]]))
x = x.logits
x = torch.argmax(x, -1)
print(processor.tokenizer.batch_decode(x))
```
Switch `eos_token_id` to `cls_token_id` or `bos_token_id` to observe the incorrect output.
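A short sketch of the suggested fix (the checkpoint name and image path below are only examples): set `decoder_start_token_id` to the tokenizer's `eos_token_id` before generating.
```python
# Sketch: start TrOCR decoding from eos rather than cls/bos (example checkpoint).
import torch
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")
model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten")

image = Image.open("line.png").convert("RGB")  # example image path
pixel_values = processor(image, return_tensors="pt").pixel_values

model.config.decoder_start_token_id = processor.tokenizer.eos_token_id  # eos, not cls/bos

with torch.no_grad():
    generated_ids = model.generate(pixel_values)
print(processor.batch_decode(generated_ids, skip_special_tokens=True))
```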
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27142/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27142/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/27141
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27141/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27141/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27141/events
|
https://github.com/huggingface/transformers/pull/27141
| 1,967,898,252 |
PR_kwDOCUB6oc5eGG1u
| 27,141 |
[FEAT] Add Neftune into transformers Trainer
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Added a test and a relevant documentation section, this PR is ready for final review!"
] | 1,698 | 1,698 | 1,698 |
CONTRIBUTOR
| null |
# What does this PR do?
As per title
Fixes: https://github.com/huggingface/trl/issues/923
Fixes: https://github.com/huggingface/transformers/issues/26899
This PR adds NEFTune, a new technique for enhancing supervised fine-tuning results, proposed in: https://arxiv.org/abs/2310.05914

I propose a very simple API: just pass a valid `neftune_noise_alpha` argument when initializing the `TrainingArguments`. To avoid any surprising behaviour, we revert to the original forward method at the end of training. This is handled inside the inner training loop, which attaches the forward hook before training starts and removes it right after the model is trained.
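For intuition, here is a rough sketch of the NEFTune idea from the paper (uniform noise added to the embedding output during training, scaled by `alpha / sqrt(seq_len * hidden_dim)`); it is only an illustration, not the exact hook implemented in the `Trainer`.
```python
# Illustrative NEFTune-style forward hook (not the Trainer's actual code).
import math
import torch

def neftune_hook(module, inputs, output, neftune_noise_alpha=5.0):
    """Add uniform noise to embedding outputs while the module is in training mode."""
    if module.training:
        seq_len, hidden_dim = output.size(1), output.size(2)
        magnitude = neftune_noise_alpha / math.sqrt(seq_len * hidden_dim)
        output = output + torch.zeros_like(output).uniform_(-magnitude, magnitude)
    return output

# Usage sketch, mirroring the description above:
# handle = model.get_input_embeddings().register_forward_hook(neftune_hook)
# ... run training ...
# handle.remove()  # restore the original forward behaviour after training
```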
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27141/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27141/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27141",
"html_url": "https://github.com/huggingface/transformers/pull/27141",
"diff_url": "https://github.com/huggingface/transformers/pull/27141.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27141.patch",
"merged_at": 1698764640000
}
|
https://api.github.com/repos/huggingface/transformers/issues/27140
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27140/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27140/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27140/events
|
https://github.com/huggingface/transformers/issues/27140
| 1,967,832,259 |
I_kwDOCUB6oc51SrzD
| 27,140 |
Tokenizer wrongly splits the drug SMILES
|
{
"login": "Chris-Tang6",
"id": 53926670,
"node_id": "MDQ6VXNlcjUzOTI2Njcw",
"avatar_url": "https://avatars.githubusercontent.com/u/53926670?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Chris-Tang6",
"html_url": "https://github.com/Chris-Tang6",
"followers_url": "https://api.github.com/users/Chris-Tang6/followers",
"following_url": "https://api.github.com/users/Chris-Tang6/following{/other_user}",
"gists_url": "https://api.github.com/users/Chris-Tang6/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Chris-Tang6/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Chris-Tang6/subscriptions",
"organizations_url": "https://api.github.com/users/Chris-Tang6/orgs",
"repos_url": "https://api.github.com/users/Chris-Tang6/repos",
"events_url": "https://api.github.com/users/Chris-Tang6/events{/privacy}",
"received_events_url": "https://api.github.com/users/Chris-Tang6/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @Chris-Tang6, thanks for raising this issue! \r\n\r\nI'm not a chemist myself, so want to make sure I've understood the desired behaviour. Could you provide what you would have expected the tokenized sequence to be? Should this also apply to `C0`, `C2` and `C3` as well as `C1`? \r\n\r\nI'm not sure what `vocab_sub2num` is in the provided example. However, if I load the tokenizer from the code snippet, I can see that `C1` isn't in the vocabulary. \r\n\r\n```\r\nfrom transformers import AutoTokenizer\r\n\r\nmodel_name = \"DeepChem/ChemBERTa-77M-MLM\" \r\ntokenizer = AutoTokenizer.from_pretrained(model_name)\r\n\r\n# Prints True\r\nprint('C' in tokenizer.vocab)\r\n\r\n# Prints False\r\nprint('C1' in tokenizer.vocab)\r\n```",
"Thank you @amyeroberts 😁\r\nIn my opinion, If the 'C1' or 'C0‘ not in the token-vocab, it is acceptable to split as 'C'. I only hope to split the seqs following the token-vocab and don't have semantic errors.\r\n\r\nAlso, the `vocab_sub2num` is caculated as follow:\r\n```\r\nvocab_sub2num = tokenizer.vocab\r\nvocab_num2sub = {value:key for key,value in vocab_sub2num.items()}\r\n```\r\n",
"Hey! Correct me if I am wrong, but the issue is with `l` not `1`. See the following snippet:\r\n```python \r\n>>> from transformers import AutoTokenizer\r\n>>> model_name = \"DeepChem/ChemBERTa-77M-MLM\" \r\n>>> tokenizer = AutoTokenizer.from_pretrained(model_name)\r\n>>> tokenizer.tokenize(\"C1\")\r\n['C', '1']\r\n>>> tokenizer.tokenize(\"Cl\")\r\n['C']\r\n```\r\n`l` is not part of the vocab but should still be tokenized and then encoded as `[UNK]`.\r\n\r\nA quick fix is to use the slow tokenizer. I don't really know why it doesn't work and can try to see if it's expected or not. \r\n\r\n```python \r\n>>> tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast = False)\r\n>>> tokenizer.encode(\"Cl\")\r\n[12, 16, 11, 13]\r\n```\r\n\r\n",
"Hi @ArthurZucker, you are right. I tried the slow tokenizer, but it still not works well.\r\n``` python\r\n>>> seq = 'COC1=C(C=C2C(=C1)CCN=C2C3=CC(=C(C=C3)Cl)Cl)Cl'\r\n>>> tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast = False)\r\n>>> tokenizer(seq, return_tensors=\"pt\", padding=False, truncation=True)\r\ntensor([[12, 16, 19, 16, 20, 22, 16, 17, 16, 22, 16, 21, 16, 17, 22, 16, 20, 18,\r\n 16, 16, 23, 22, 16, 21, 16, 26, 22, 16, 16, 17, 22, 16, 17, 16, 22, 16,\r\n 26, 18, 16, 11, 18, 16, 11, 18, 16, 11, 13]])\r\n```",
"Hey, could you elaborate on what is not working well? ",
"Hi, I read again about the [ChemBERTa paper](https://[arxiv.org/pdf/2010.09885.pdf](https://arxiv.org/pdf/2010.09885.pdf)). It seems that the problem is not on the tokenizer.\r\n\r\nThe author compaired two tokenizer: BPE tokenizer and Smiles tokenizer(in deepchem). For the BPE can replace the rare or unknown words with konwn words. And I think the `AutoTokenizer` is same as the `BPE` that's why the 'Cl' is replaced with 'C'. \r\n\r\nIn contrast, the Smilestokenizer can hold the rare words(e.g. `Cl`, `Mn`, `Cu`) and the result is slightly better than the BPE.\r\n\r\n\r\nAnd the usage of SmilesTokenizer as follow:\r\nThe vocab.txt from [here](https://github.com/seyonechithrananda/bert-loves-chemistry/blob/b0b29205b4db002a043152e0dd3c0d23fbd351fe/chemberta/bertviz_clone/vocab.txt#L4).\r\n``` python\r\nfrom deepchem.feat import SmilesTokenizer\r\nvocab_path = './vocab.txt' # path to vocab.txt\r\ntokenizer = SmilesTokenizer(vocab_path) # SmilesTokenizer\r\ntokenizer.tokenize('Cl')\r\n>>> ['Cl']\r\n```\r\n\r\n",
"Note that you can also always add tokens to the vocab after training using `tokenizer.add_tokens([\"Cl\", \"Mn\"])` etc. \r\nYou can also enable bytefallback for unkown words, adding 256 unicodes to the vocab to not lose the raw information.\r\nAnyway glad you issue is fixed! "
] | 1,698 | 1,699 | 1,699 |
NONE
| null |
### System Info
- `transformers` version: 4.33.0
- Platform: Linux-5.15.133+-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.3
- Accelerate version: 0.22.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.0+cpu (False)
- Tensorflow version (GPU?): 2.12.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.7.2 (cpu)
- Jax version: 0.4.13
- JaxLib version: 0.4.13
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker Hello Arthur 😊 I have a problem when I use ChemBERTa to get pretrained drug features. One step is to use the tokenizer to split the drug SMILES into single atoms, and I found the split atoms don't match the original sequence.
e.g. the input seq is ```COC1=C(C=C2C(=C1)CCN=C2C3=CC(=C(C=C3)Cl)Cl)Cl```
However, the tokenizer output incorrectly labels 'Cl' as 'C', and I don't know how to fix it. I also found that the ChemBERTa token table does contain a 'Cl' token.

The output of `tokenize()` is as follows:
```['C', 'O', 'C', '1', '=', 'C', '(', 'C', '=', 'C', '2', 'C', '(', '=', 'C', '1', ')', 'C', 'C', 'N', '=', 'C', '2', 'C', '3', '=', 'C', 'C', '(', '=', 'C', '(', 'C', '=', 'C', '3', ')', 'C', ')', 'C', ')', 'C']```
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Code as below:
```python
from IPython.display import clear_output as clr
import numpy as np
import pandas as pd
import torch
from tqdm import tqdm
from transformers import AutoModelForMaskedLM, AutoTokenizer
import torch.nn as nn
model_name = "DeepChem/ChemBERTa-77M-MLM"
chemberta = AutoModelForMaskedLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
print(chemberta)
chemberta._modules["lm_head"] = nn.Identity()
chemberta.eval()
seq = 'COC1=C(C=C2C(=C1)CCN=C2C3=CC(=C(C=C3)Cl)Cl)Cl'
encoded_input = tokenizer(seq, return_tensors="pt", padding=False, truncation=True, add_special_tokens=False)
encoded_token = tokenizer.tokenize(seq)
model_output = chemberta(**encoded_input)
print(len(seq), seq)
print(encoded_input.input_ids)
print(len(encoded_token), encoded_token)
print(model_output)
```
### Expected behavior
I hope to split the SMILES seq correctly.
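A minimal sketch of the workaround mentioned in the comments above (the token list here is just an example): register the multi-character atom symbols as non-normalized added tokens so that `Cl` survives tokenization, and resize the embeddings if any tokens were added.
```python
# Sketch: add multi-character atoms as explicit tokens (example list, not exhaustive).
from transformers import AutoTokenizer, AutoModelForMaskedLM
from tokenizers import AddedToken

model_name = "DeepChem/ChemBERTa-77M-MLM"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

new_tokens = [AddedToken(t, normalized=False) for t in ["Cl", "Br", "Mn", "Cu"]]
num_added = tokenizer.add_tokens(new_tokens)
if num_added:
    model.resize_token_embeddings(len(tokenizer))  # new rows start randomly initialized

print(tokenizer.tokenize("COC1=C(C=C3)Cl"))  # 'Cl' should now stay a single token
```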
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27140/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27140/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/27139
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27139/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27139/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27139/events
|
https://github.com/huggingface/transformers/issues/27139
| 1,967,766,965 |
I_kwDOCUB6oc51Sb21
| 27,139 |
Why isn't intermediate_size 4 * hidden_size for Llama as in paper?
|
{
"login": "sytelus",
"id": 2096835,
"node_id": "MDQ6VXNlcjIwOTY4MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2096835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sytelus",
"html_url": "https://github.com/sytelus",
"followers_url": "https://api.github.com/users/sytelus/followers",
"following_url": "https://api.github.com/users/sytelus/following{/other_user}",
"gists_url": "https://api.github.com/users/sytelus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sytelus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sytelus/subscriptions",
"organizations_url": "https://api.github.com/users/sytelus/orgs",
"repos_url": "https://api.github.com/users/sytelus/repos",
"events_url": "https://api.github.com/users/sytelus/events{/privacy}",
"received_events_url": "https://api.github.com/users/sytelus/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Oops my bad. It's because of SwiGLU.\r\n\r\n```\r\nINTERMEDIATE_SIZE_MAP = {\r\n \"7B\": 11008,\r\n \"13B\": 13824,\r\n \"30B\": 17920,\r\n \"65B\": 22016,\r\n}\r\n```\r\n\r\nThe intermediate_size is calculated as follows:\r\n\r\n```\r\ndef compute_intermediate_size(n):\r\n return int(math.ceil(n * 8 / 3) + 255) // 256 * 256\r\n```"
] | 1,698 | 1,698 | 1,698 |
NONE
| null |
The Llama paper uses 4 × hidden size for the MLP's intermediate layer. So, for the 7B model, `hidden_size=4096` should give `intermediate_size=16384`. However, the Llama config defaults it to `11008`.
https://github.com/huggingface/transformers/blob/722e9364916e527e8d46cbd828a1516bf6aaebd6/src/transformers/models/llama/configuration_llama.py#L47
Any idea why?
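As a quick numeric check (using the conversion-script formula quoted in the comment above): with SwiGLU the intermediate size is roughly `8/3 * hidden_size`, rounded up to a multiple of 256, which yields 11008 rather than 16384 for `hidden_size=4096`.
```python
# Verifying the 7B value with the formula quoted in the comment.
import math

def compute_intermediate_size(n):
    return int(math.ceil(n * 8 / 3) + 255) // 256 * 256

print(compute_intermediate_size(4096))  # 11008
```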
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27139/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27139/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/27138
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27138/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27138/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27138/events
|
https://github.com/huggingface/transformers/issues/27138
| 1,967,404,132 |
I_kwDOCUB6oc51RDRk
| 27,138 |
save_total_limit incorrectly deletes checkpoints
|
{
"login": "huchinlp",
"id": 40781986,
"node_id": "MDQ6VXNlcjQwNzgxOTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/40781986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/huchinlp",
"html_url": "https://github.com/huchinlp",
"followers_url": "https://api.github.com/users/huchinlp/followers",
"following_url": "https://api.github.com/users/huchinlp/following{/other_user}",
"gists_url": "https://api.github.com/users/huchinlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/huchinlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/huchinlp/subscriptions",
"organizations_url": "https://api.github.com/users/huchinlp/orgs",
"repos_url": "https://api.github.com/users/huchinlp/repos",
"events_url": "https://api.github.com/users/huchinlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/huchinlp/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"FYI in case this is relevant:\r\n- https://github.com/huggingface/transformers/issues/26961",
"cc @muellerz @pacman100 ",
"Hello, please refer to https://github.com/huggingface/transformers/issues/26961#issuecomment-1827691624",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,698 | 1,704 | 1,704 |
NONE
| null |
### System Info
transformers==4.34.1
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I set `save_total_limit` to 10, but surprisingly found it deleting my **latest** model. It was supposed to keep checkpoint-10000 and subsequent checkpoints, but instead, I found that the following checkpoints remained:
```
checkpoint-9000
checkpoint-9100
checkpoint-9200
checkpoint-9300
checkpoint-9400
checkpoint-9500
checkpoint-9600
checkpoint-9700
checkpoint-9800
checkpoint-9900
```
I suspect that this may be due to the deletion process using the string names of the files rather than their numerical value.
### Expected behavior
It should keep the latest checkpoints.
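To illustrate the suspected cause (this is just a demonstration of string vs. numeric ordering, not the Trainer's actual code): sorted as strings, `checkpoint-10000` comes before `checkpoint-9900`, so it looks like the oldest checkpoint and gets rotated out first.
```python
# Demonstration: lexicographic sorting misorders checkpoint step numbers.
import re

dirs = [f"checkpoint-{step}" for step in (9900, 9950, 10000, 10050)]

print(sorted(dirs))
# ['checkpoint-10000', 'checkpoint-10050', 'checkpoint-9900', 'checkpoint-9950']

print(sorted(dirs, key=lambda d: int(re.search(r"\d+", d).group())))
# ['checkpoint-9900', 'checkpoint-9950', 'checkpoint-10000', 'checkpoint-10050']
```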
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27138/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27138/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/27137
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27137/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27137/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27137/events
|
https://github.com/huggingface/transformers/pull/27137
| 1,967,348,954 |
PR_kwDOCUB6oc5eEOLc
| 27,137 |
automatically generated code
|
{
"login": "yunyicheng",
"id": 55462866,
"node_id": "MDQ6VXNlcjU1NDYyODY2",
"avatar_url": "https://avatars.githubusercontent.com/u/55462866?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yunyicheng",
"html_url": "https://github.com/yunyicheng",
"followers_url": "https://api.github.com/users/yunyicheng/followers",
"following_url": "https://api.github.com/users/yunyicheng/following{/other_user}",
"gists_url": "https://api.github.com/users/yunyicheng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yunyicheng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yunyicheng/subscriptions",
"organizations_url": "https://api.github.com/users/yunyicheng/orgs",
"repos_url": "https://api.github.com/users/yunyicheng/repos",
"events_url": "https://api.github.com/users/yunyicheng/events{/privacy}",
"received_events_url": "https://api.github.com/users/yunyicheng/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @yunyicheng ,\r\nThank you for wanting to contribute! I think this model should be provided/used as custom code rather than be added directly into `transformers`.\r\n",
"Hello @clefourrier , thank you for your reply! Even though Massformer is partially built upon Graphormer, it has many differences in architecture. Thus I think it is better to add a new model to transformers. Could you tell me your reasons for making it a custom model rather than a new model? I would like to hear them before making a decision on how I should proceed.",
"Hi @yunyicheng ,\r\nAs adding models to the core of transformers creates a lot of maintenance work for the team, as well as significant time and effort for contributors to match the general lib design, we only add popular and important models for the community, which we see through how used the models are. \r\nIt would therefore be better to first add your model as custom code on the hub, then see how much it is used by the general community. It will also be easier to integrate it! Here is a [tutorial](https://huggingface.co/docs/transformers/custom_models).",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,698 | 1,701 | 1,701 |
NONE
| null |
# What does this PR do?
This PR adds a new transformers model, MassFormer, based on Graphormer, for mass spectrum prediction.
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
- original implementation by : @adamoyoung
- graph models: @clefourrier
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27137/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27137/timeline
| null | true |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27137",
"html_url": "https://github.com/huggingface/transformers/pull/27137",
"diff_url": "https://github.com/huggingface/transformers/pull/27137.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27137.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/27136
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27136/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27136/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27136/events
|
https://github.com/huggingface/transformers/pull/27136
| 1,967,303,989 |
PR_kwDOCUB6oc5eEEan
| 27,136 |
Removed the redundant SiLUActivation class.
|
{
"login": "hi-sushanta",
"id": 93595990,
"node_id": "U_kgDOBZQpVg",
"avatar_url": "https://avatars.githubusercontent.com/u/93595990?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hi-sushanta",
"html_url": "https://github.com/hi-sushanta",
"followers_url": "https://api.github.com/users/hi-sushanta/followers",
"following_url": "https://api.github.com/users/hi-sushanta/following{/other_user}",
"gists_url": "https://api.github.com/users/hi-sushanta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hi-sushanta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hi-sushanta/subscriptions",
"organizations_url": "https://api.github.com/users/hi-sushanta/orgs",
"repos_url": "https://api.github.com/users/hi-sushanta/repos",
"events_url": "https://api.github.com/users/hi-sushanta/events{/privacy}",
"received_events_url": "https://api.github.com/users/hi-sushanta/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi! I first set RUN_SLOW=1 and then ran the PyTest command to verify that all of my tests passed successfully.\r\n",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27136). All of your documentation changes will be reflected on that endpoint."
] | 1,698 | 1,698 | 1,698 |
CONTRIBUTOR
| null |
Removed the redundant **SiLUActivation** class; **nn.SiLU** is now used directly.
Here is what it looks like now:

## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27136/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27136/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27136",
"html_url": "https://github.com/huggingface/transformers/pull/27136",
"diff_url": "https://github.com/huggingface/transformers/pull/27136.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27136.patch",
"merged_at": 1698948837000
}
|
https://api.github.com/repos/huggingface/transformers/issues/27135
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27135/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27135/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27135/events
|
https://github.com/huggingface/transformers/issues/27135
| 1,967,166,248 |
I_kwDOCUB6oc51QJMo
| 27,135 |
Failed to import 'Wav2Vec2PhonemeEncoder' from 'transformers'
|
{
"login": "Suryatejakalapala",
"id": 83916568,
"node_id": "MDQ6VXNlcjgzOTE2NTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/83916568?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Suryatejakalapala",
"html_url": "https://github.com/Suryatejakalapala",
"followers_url": "https://api.github.com/users/Suryatejakalapala/followers",
"following_url": "https://api.github.com/users/Suryatejakalapala/following{/other_user}",
"gists_url": "https://api.github.com/users/Suryatejakalapala/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Suryatejakalapala/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Suryatejakalapala/subscriptions",
"organizations_url": "https://api.github.com/users/Suryatejakalapala/orgs",
"repos_url": "https://api.github.com/users/Suryatejakalapala/repos",
"events_url": "https://api.github.com/users/Suryatejakalapala/events{/privacy}",
"received_events_url": "https://api.github.com/users/Suryatejakalapala/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @Suryatejakalapala, thanks for raising this issue! \r\n\r\nI can't find any reference to the class `Wav2Vec2PhonemeEncoder` [in the library](https://github.com/search?q=repo%3Ahuggingface%2Ftransformers+Wav2Vec2PhonemeEncoder&type=code). Is there a particular example in docs or resources which uses this class? ",
"I am not familiar with the transformer library I got the phoneme_encoder from bard(AI) if there is smth wrong Sorry for wasting your time ",
"OK, it's likely Bard just hallucinated that class then 👍 ",
"Ha thanks for the info then I am closing "
] | 1,698 | 1,698 | 1,698 |
NONE
| null |
### System Info
- `transformers` version: 4.34.1
- Platform: Windows-10-10.0.23531-SP0
- Python version: 3.11.2
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cpu (False)
- Tensorflow version (GPU?): 2.14.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Just importing `Wav2Vec2PhonemeEncoder` from `transformers` raises the error.
### Expected behavior
Can't import `Wav2Vec2PhonemeEncoder`.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27135/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27135/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/27134
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27134/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27134/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27134/events
|
https://github.com/huggingface/transformers/pull/27134
| 1,967,131,039 |
PR_kwDOCUB6oc5eDgGy
| 27,134 |
🌐 [i18n-ZH] Translate tflite.md into Chinese
|
{
"login": "yyLeaves",
"id": 76979429,
"node_id": "MDQ6VXNlcjc2OTc5NDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/76979429?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yyLeaves",
"html_url": "https://github.com/yyLeaves",
"followers_url": "https://api.github.com/users/yyLeaves/followers",
"following_url": "https://api.github.com/users/yyLeaves/following{/other_user}",
"gists_url": "https://api.github.com/users/yyLeaves/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yyLeaves/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yyLeaves/subscriptions",
"organizations_url": "https://api.github.com/users/yyLeaves/orgs",
"repos_url": "https://api.github.com/users/yyLeaves/repos",
"events_url": "https://api.github.com/users/yyLeaves/events{/privacy}",
"received_events_url": "https://api.github.com/users/yyLeaves/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27134). All of your documentation changes will be reflected on that endpoint."
] | 1,698 | 1,698 | 1,698 |
CONTRIBUTOR
| null |
# What does this PR do?
Translate tflite.md into Chinese
part of #20095
## Who can review?
Documentation: @stevhliu
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27134/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27134/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27134",
"html_url": "https://github.com/huggingface/transformers/pull/27134",
"diff_url": "https://github.com/huggingface/transformers/pull/27134.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27134.patch",
"merged_at": 1698781848000
}
|
https://api.github.com/repos/huggingface/transformers/issues/27133
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27133/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27133/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27133/events
|
https://github.com/huggingface/transformers/pull/27133
| 1,967,129,482 |
PR_kwDOCUB6oc5eDfy4
| 27,133 |
Fuyu processing update
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@pcuenca Here's the draft PR for updating the image processor. In relation to your PR with the box coordinate transformations, you'll notice that I've removed the `target_height` and `target_width` attributes and have replaced them with the dictionary `size`. This is to reflect the pattern in other image processors.",
"cc @molbap ",
"_The documentation is not available anymore as the PR was closed or merged._",
"@amyeroberts Nice! I'll update accordingly.",
"LGTM, I'll add some tests related to model in my PR! Ok to merge to https://github.com/huggingface/transformers/pull/27007 when https://github.com/amyeroberts/transformers/pull/113 is merged, and I'll add a model tester there"
] | 1,698 | 1,698 | 1,698 |
COLLABORATOR
| null |
# What does this PR do?
This PR builds upon #27007 - ticking off some elements of the TODO list and bringing the processor and image processor more in line with the expected patterns in the library.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27133/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27133/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27133",
"html_url": "https://github.com/huggingface/transformers/pull/27133",
"diff_url": "https://github.com/huggingface/transformers/pull/27133.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27133.patch",
"merged_at": 1698864793000
}
|
https://api.github.com/repos/huggingface/transformers/issues/27132
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27132/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27132/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27132/events
|
https://github.com/huggingface/transformers/issues/27132
| 1,966,960,672 |
I_kwDOCUB6oc51PXAg
| 27,132 |
Fast tokenizer breaks added tokens
|
{
"login": "geronimi73",
"id": 141400217,
"node_id": "U_kgDOCG2YmQ",
"avatar_url": "https://avatars.githubusercontent.com/u/141400217?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/geronimi73",
"html_url": "https://github.com/geronimi73",
"followers_url": "https://api.github.com/users/geronimi73/followers",
"following_url": "https://api.github.com/users/geronimi73/following{/other_user}",
"gists_url": "https://api.github.com/users/geronimi73/gists{/gist_id}",
"starred_url": "https://api.github.com/users/geronimi73/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/geronimi73/subscriptions",
"organizations_url": "https://api.github.com/users/geronimi73/orgs",
"repos_url": "https://api.github.com/users/geronimi73/repos",
"events_url": "https://api.github.com/users/geronimi73/events{/privacy}",
"received_events_url": "https://api.github.com/users/geronimi73/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @geronimi73, thanks for raising an issue! \r\n\r\n@ArthurZucker is off for this week and is the main person who knows and works with the tokenizers, so you might have to wait until then to have an answer. \r\n\r\n@Rocketknight1 any chance you know what's happening? ",
"Hi @geronimi73, I'll wait for @ArthurZucker to return to give a full answer here, but in the meantime I think the issue is that when you add a normal token, the tokenizer may split it. If you want to preserve an important control token like `<|im_start|>` you should make it a special token. Try doing this instead:\r\n\r\n```python\r\ntokenizer.add_special_tokens({\"additional_special_tokens\": [\"<|im_start|>\"]})\r\ntokenizer.add_special_tokens({\"eos_token\": \"<|im_end|>\"})\r\n```",
"Well, it's partly true partly wrong 😅 \r\nWhen you add a token, if it is not special, it will be normalized by default. I'll add the `add_tokens` function to the doc it seems that it was removed. But anyway, the Llama normalizer adds a `SPIECE_UNDERLINE` at the beginning of the special tokens, which will thus be a different token. AddedTokens (special or not) should never be splitted, but the content of the added tokens is affected by the normalizer",
"ok, thanks!",
"it's somehow working now. \r\n\r\njust to **sum this up** for others who are struggling with this too:\r\n- I raised this issue because the fast tokenizer breaks the ChatML tag `<|im_start|>` into several tokens even though it was added with `tokenizer.add_tokens([\"<|im_start|>\"])`, slow tokenizer works fine\r\n- @ArthurZucker explains above, Llama normalizer adds a SPIECE_UNDERLINE; indeed, fast tokenizer encodes `<|im_start|>` correctly when token is added with ..\r\n```python\r\ntokenizer.add_tokens(\r\n\tAddedToken(\"<|im_start|>\",normalized=False))\r\n)\r\n\r\n```\r\n- but, new problem. decoding now **adds a space** after added tokens, example\r\n```python\r\ntokenizer = AutoTokenizer.from_pretrained(\"../models/llama2-7b\", use_fast=True, legacy=False)\r\n\r\ntokenizer.add_tokens(\r\n\tAddedToken(\"<|im_start|>\",normalized=False, rstrip=True, lstrip=False)\r\n)\r\ntokenizer.add_special_tokens({\"eos_token\": \"<|im_end|>\"})\r\n\r\n# https://huggingface.co/docs/transformers/main/chat_templating\r\ntokenizer.chat_template = \"{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% for message in messages %}{{'<|im_start|>' + message['role'] + '\\n' + message['content'] + '<|im_end|>' + '\\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\\n' }}{% endif %}\"\r\n\r\nmessages=[\r\n {\"role\": \"user\", \"content\": \"Hi there!\"},\r\n {\"role\": \"assistant\", \"content\": \"Nice to meet you!\"}\r\n]\r\n\r\nchat = tokenizer.apply_chat_template(messages, tokenize=False)\r\nchat_tokenized = tokenizer(chat, add_special_tokens=False)[\"input_ids\"]\r\n\r\nprint(\"INPUT\")\r\nprint(chat)\r\nprint(\"-\"*30)\r\nprint(\"DECODE(ENCODE(INPUT))\")\r\nprint(tokenizer.decode(chat_tokenized))\r\n\r\n# INPUT\r\n# <|im_start|>user\r\n# Hi there!<|im_end|>\r\n# <|im_start|>assistant\r\n# Nice to meet you!<|im_end|>\r\n\r\n# ------------------------------\r\n# DECODE(ENCODE(INPUT))\r\n# <|im_start|> user\r\n# Hi there!<|im_end|> \r\n# <|im_start|> assistant\r\n# Nice to meet you!<|im_end|> \r\n```\r\n\r\n- **fix all of the above**: use slow tokenizer `use_fast=False, legacy=False`, add tokens with `tokenizer.add_tokens([\"<|im_start|>\"])`, decode with `spaces_between_special_tokens=False` like this\r\n```python\r\ntokenizer = AutoTokenizer.from_pretrained(\"../models/llama2-7b\", use_fast=False, legacy=False)\r\ntokenizer.add_tokens([\"<|im_start|>\"])\r\n...\r\nchat_tokenized = tokenizer(chat, add_special_tokens=False)[\"input_ids\"]\r\nprint(tokenizer.decode(chat_tokenized, spaces_between_special_tokens=False))\r\n```\r\n- using `transformers 4.35.0` btw",
"Thanks for the great explanation! \r\nRegarding the space added after added tokens, this PR will fix it: https://github.com/huggingface/tokenizers/pull/1357 😉 I'll have to change the Llama paradigm a little bit to make sure it's compatible ",
"feel free to play with #26678 as well 🤗 "
] | 1,698 | 1,699 | 1,699 |
NONE
| null |
### System Info
- `transformers` version: 4.35.0.dev0
- Platform: Linux-6.2.0-35-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: 0.25.0.dev0
- PyTorch version (GPU?): 2.1.0+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("models/llama2-7b", use_fast=False)
# add tokens for chatml
tokenizer.add_tokens(["<|im_start|>"])
tokenizer.add_special_tokens({"eos_token": "<|im_end|>"})
messages = [ {"role": "user", "content": "question"},
{"role": "assistant", "content": "answer"} ]
# https://huggingface.co/docs/transformers/main/chat_templating
tokenizer.chat_template = "{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}"
chat = tokenizer.apply_chat_template(messages, tokenize=False)
chat_tokenized = tokenizer(chat, add_special_tokens=False)["input_ids"]
for token in chat_tokenized:
print(f"{token} - \"{tokenizer.decode(token)}\"")
```
**output**: first occurrence of `<|im_start|>` is correctly tokenized, the second one is split
```
32000 - "<|im_start|>"
1792 - "user"
13 - "
"
12470 - "question"
32001 - "<|im_end|>"
29871 - ""
13 - "
"
**29966 - "<"
29989 - "|"
326 - "im"
29918 - "_"
2962 - "start"
29989 - "|"
29958 - ">"**
465 - "ass"
22137 - "istant"
13 - "
"
12011 - "answer"
32001 - "<|im_end|>"
29871 - ""
13 - "
"
```
### Expected behavior
```
32000 - "<|im_start|>"
1404 - "user"
13 - "<0x0A>"
12470 - "question"
32001 - "<|im_end|>"
29871 - ""
13 - "<0x0A>"
32000 - "<|im_start|>"
20255 - "assistant"
13 - "<0x0A>"
12011 - "answer"
32001 - "<|im_end|>"
29871 - ""
13 - "<0x0A>"
```
this is the correct output of the slow tokenizer `AutoTokenizer.from_pretrained("models/llama2-7b", use_fast=False)`
1. Why does this happen with the fast tokenizer but not the slow one?
2. Is there any other solution than **not** using the fast tokenizer?
I guess this is known, sorry if I missed it in the existing issues.
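For anyone landing here, a minimal round-trip check of the workaround discussed in the comments above (adding the tag as an `AddedToken` with `normalized=False`). This is only a sketch: the model path and the ChatML tag are placeholders.
```python
from transformers import AutoTokenizer, AddedToken

for fast in (False, True):
    tok = AutoTokenizer.from_pretrained("models/llama2-7b", use_fast=fast)
    # normalized=False keeps the Llama normalizer from prepending SPIECE_UNDERLINE
    tok.add_tokens([AddedToken("<|im_start|>", normalized=False)])
    ids = tok("user\n<|im_start|>assistant", add_special_tokens=False)["input_ids"]
    # the tag should survive as a single id even when it follows a newline
    assert ids.count(tok.convert_tokens_to_ids("<|im_start|>")) == 1, f"use_fast={fast} split the token"
```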
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27132/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27132/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/27131
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27131/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27131/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27131/events
|
https://github.com/huggingface/transformers/pull/27131
| 1,966,903,232 |
PR_kwDOCUB6oc5eCzWg
| 27,131 |
device agnostic trainer testing
|
{
"login": "statelesshz",
"id": 28150734,
"node_id": "MDQ6VXNlcjI4MTUwNzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/28150734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/statelesshz",
"html_url": "https://github.com/statelesshz",
"followers_url": "https://api.github.com/users/statelesshz/followers",
"following_url": "https://api.github.com/users/statelesshz/following{/other_user}",
"gists_url": "https://api.github.com/users/statelesshz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/statelesshz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/statelesshz/subscriptions",
"organizations_url": "https://api.github.com/users/statelesshz/orgs",
"repos_url": "https://api.github.com/users/statelesshz/repos",
"events_url": "https://api.github.com/users/statelesshz/events{/privacy}",
"received_events_url": "https://api.github.com/users/statelesshz/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @ydshieh ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27131). All of your documentation changes will be reflected on that endpoint.",
"Results on CI runner looks good."
] | 1,698 | 1,698 | 1,698 |
CONTRIBUTOR
| null |
# What does this PR do?
Part of https://github.com/huggingface/transformers/issues/25654#issuecomment-1783704306
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27131/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27131/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27131",
"html_url": "https://github.com/huggingface/transformers/pull/27131",
"diff_url": "https://github.com/huggingface/transformers/pull/27131.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27131.patch",
"merged_at": 1698689800000
}
|
https://api.github.com/repos/huggingface/transformers/issues/27130
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27130/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27130/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27130/events
|
https://github.com/huggingface/transformers/pull/27130
| 1,966,902,710 |
PR_kwDOCUB6oc5eCzQW
| 27,130 |
[Typo fix] flag config in WANDB
|
{
"login": "SoyGema",
"id": 24204714,
"node_id": "MDQ6VXNlcjI0MjA0NzE0",
"avatar_url": "https://avatars.githubusercontent.com/u/24204714?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SoyGema",
"html_url": "https://github.com/SoyGema",
"followers_url": "https://api.github.com/users/SoyGema/followers",
"following_url": "https://api.github.com/users/SoyGema/following{/other_user}",
"gists_url": "https://api.github.com/users/SoyGema/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SoyGema/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SoyGema/subscriptions",
"organizations_url": "https://api.github.com/users/SoyGema/orgs",
"repos_url": "https://api.github.com/users/SoyGema/repos",
"events_url": "https://api.github.com/users/SoyGema/events{/privacy}",
"received_events_url": "https://api.github.com/users/SoyGema/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27130). All of your documentation changes will be reflected on that endpoint."
] | 1,698 | 1,698 | 1,698 |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
-->
<!-- Remove if not applicable -->
Fixes a typo in the flag for the WANDB configuration:
a `_` was missing in `--report_to_all`
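For context, this command-line flag maps to the `report_to` field of `TrainingArguments`; a minimal sketch with illustrative values (not taken from the docs being fixed):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",   # hypothetical output directory
    report_to="wandb",  # or "all" to log to every installed integration
)
```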
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
I guess @ArthurZucker
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27130/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27130/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27130",
"html_url": "https://github.com/huggingface/transformers/pull/27130",
"diff_url": "https://github.com/huggingface/transformers/pull/27130.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27130.patch",
"merged_at": 1698603747000
}
|
https://api.github.com/repos/huggingface/transformers/issues/27129
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27129/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27129/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27129/events
|
https://github.com/huggingface/transformers/pull/27129
| 1,966,838,622 |
PR_kwDOCUB6oc5eCm-q
| 27,129 |
device agnostic pipelines testing
|
{
"login": "statelesshz",
"id": 28150734,
"node_id": "MDQ6VXNlcjI4MTUwNzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/28150734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/statelesshz",
"html_url": "https://github.com/statelesshz",
"followers_url": "https://api.github.com/users/statelesshz/followers",
"following_url": "https://api.github.com/users/statelesshz/following{/other_user}",
"gists_url": "https://api.github.com/users/statelesshz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/statelesshz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/statelesshz/subscriptions",
"organizations_url": "https://api.github.com/users/statelesshz/orgs",
"repos_url": "https://api.github.com/users/statelesshz/repos",
"events_url": "https://api.github.com/users/statelesshz/events{/privacy}",
"received_events_url": "https://api.github.com/users/statelesshz/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"verified with Mac-Studio\r\n\r\n<img width=\"800\" alt=\"image\" src=\"https://github.com/huggingface/transformers/assets/28150734/7ac37e29-26a8-4a60-a3c0-3d6efed1fc22\">\r\n\r\n<img width=\"521\" alt=\"image\" src=\"https://github.com/huggingface/transformers/assets/28150734/e4d31f43-57a3-44d2-986f-688ee39b5999\">\r\n\r\n\r\n<img width=\"1918\" alt=\"image\" src=\"https://github.com/huggingface/transformers/assets/28150734/8a44a09a-b36f-414e-8f88-8a6b9e9c5cfd\">\r\n\r\n\r\n",
"cc @ydshieh ",
"@ydshieh Marking this ready for review :-)",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27129). All of your documentation changes will be reflected on that endpoint.",
"test run results look good, so merge! Thanks again @statelesshz "
] | 1,698 | 1,698 | 1,698 |
CONTRIBUTOR
| null |
# What does this PR do?
Part of https://github.com/huggingface/transformers/issues/25654#issuecomment-1783704306
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
cc @ydshieh
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27129/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27129/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27129",
"html_url": "https://github.com/huggingface/transformers/pull/27129",
"diff_url": "https://github.com/huggingface/transformers/pull/27129.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27129.patch",
"merged_at": 1698763591000
}
|
https://api.github.com/repos/huggingface/transformers/issues/27128
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27128/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27128/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27128/events
|
https://github.com/huggingface/transformers/pull/27128
| 1,966,819,166 |
PR_kwDOCUB6oc5eCjNH
| 27,128 |
[docstring] Fix docstring for AltCLIPTextConfig, AltCLIPVisionConfig and AltCLIPConfig
|
{
"login": "AksharGoyal",
"id": 38995624,
"node_id": "MDQ6VXNlcjM4OTk1NjI0",
"avatar_url": "https://avatars.githubusercontent.com/u/38995624?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AksharGoyal",
"html_url": "https://github.com/AksharGoyal",
"followers_url": "https://api.github.com/users/AksharGoyal/followers",
"following_url": "https://api.github.com/users/AksharGoyal/following{/other_user}",
"gists_url": "https://api.github.com/users/AksharGoyal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AksharGoyal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AksharGoyal/subscriptions",
"organizations_url": "https://api.github.com/users/AksharGoyal/orgs",
"repos_url": "https://api.github.com/users/AksharGoyal/repos",
"events_url": "https://api.github.com/users/AksharGoyal/events{/privacy}",
"received_events_url": "https://api.github.com/users/AksharGoyal/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @amyeroberts I have made the change. Let me know if anything else is needed from my side. For some reason removing AltCLIPVisionConfig is giving errors so I let it stay there. Any tip on resolving it would be helpful.",
"@AksharGoyal Thanks! For the removal of the `AltCLIPVisionConfig` what errors are you getting?",
"@amyeroberts This is what I see when I go to one of the [failed workflows](https://app.circleci.com/pipelines/github/huggingface/transformers/76713/workflows/18e1d257-983d-4f62-8733-8827d768d425/jobs/975389)\r\n```\r\n2023-10-29 19:05:35.437483: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT\r\n/home/circleci/transformers/src/transformers/deepspeed.py:23: FutureWarning: transformers.deepspeed module is deprecated and will be removed in a future version. Please import deepspeed modules directly from transformers.integrations\r\n warnings.warn(\r\nTraceback (most recent call last):\r\n File \"utils/check_docstrings.py\", line 1231, in <module>\r\n check_docstrings(overwrite=args.fix_and_overwrite)\r\n File \"utils/check_docstrings.py\", line 1223, in check_docstrings\r\n raise ValueError(error_message)\r\nValueError: There was at least one problem when checking docstrings of public objects.\r\nThe following objects docstrings contain templates you need to fix: search for `<fill_type>` or `<fill_docstring>`.\r\n- AltCLIPVisionConfig\r\n\r\nExited with code exit status 1\r\n\r\n```",
"@AksharGoyal The issue is arising because some of the parameters in `AltCLIPVisionConfig`'s docstring aren't correctly filled in. You'll see now in the configuration file `<fill_docstring>` has been added in places where descriptions are needed ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27128). All of your documentation changes will be reflected on that endpoint."
] | 1,698 | 1,698 | 1,698 |
CONTRIBUTOR
| null |
# What does this PR do?
- Fixed docstrings for AltCLIPTextConfig, AltCLIPVisionConfig and AltCLIPConfig
- Cleaned up a few docstrings
Fixes #26638
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27128/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27128/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27128",
"html_url": "https://github.com/huggingface/transformers/pull/27128",
"diff_url": "https://github.com/huggingface/transformers/pull/27128.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27128.patch",
"merged_at": 1698747615000
}
|
https://api.github.com/repos/huggingface/transformers/issues/27127
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27127/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27127/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27127/events
|
https://github.com/huggingface/transformers/issues/27127
| 1,966,816,948 |
I_kwDOCUB6oc51Oz60
| 27,127 |
[docstring] Fix docstring for AltCLIPVisionConfig, AltCLIPTextConfig and AltCLIPConfig
|
{
"login": "AksharGoyal",
"id": 38995624,
"node_id": "MDQ6VXNlcjM4OTk1NjI0",
"avatar_url": "https://avatars.githubusercontent.com/u/38995624?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AksharGoyal",
"html_url": "https://github.com/AksharGoyal",
"followers_url": "https://api.github.com/users/AksharGoyal/followers",
"following_url": "https://api.github.com/users/AksharGoyal/following{/other_user}",
"gists_url": "https://api.github.com/users/AksharGoyal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AksharGoyal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AksharGoyal/subscriptions",
"organizations_url": "https://api.github.com/users/AksharGoyal/orgs",
"repos_url": "https://api.github.com/users/AksharGoyal/repos",
"events_url": "https://api.github.com/users/AksharGoyal/events{/privacy}",
"received_events_url": "https://api.github.com/users/AksharGoyal/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,698 | 1,701 | 1,701 |
CONTRIBUTOR
| null |
I will take AltCLIPVisionConfig, AltCLIPTextConfig
_Originally posted by @AksharGoyal in https://github.com/huggingface/transformers/issues/26638#issuecomment-1769186608_
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27127/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27127/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/27125
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27125/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27125/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27125/events
|
https://github.com/huggingface/transformers/pull/27125
| 1,966,701,398 |
PR_kwDOCUB6oc5eCMNs
| 27,125 |
[`FA2`/ `Mistral`] Revert previous behavior with right padding + forward
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,698 | 1,698 | 1,698 |
CONTRIBUTOR
| null |
# What does this PR do?
Addresses: https://github.com/huggingface/transformers/pull/26912#issuecomment-1783842424
As stated in that comment, https://github.com/huggingface/transformers/pull/27086 has mistakenly reverted https://github.com/huggingface/transformers/pull/26912 - this PR simply reverts it back
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27125/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27125/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27125",
"html_url": "https://github.com/huggingface/transformers/pull/27125",
"diff_url": "https://github.com/huggingface/transformers/pull/27125.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27125.patch",
"merged_at": 1698660290000
}
|
https://api.github.com/repos/huggingface/transformers/issues/27124
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27124/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27124/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27124/events
|
https://github.com/huggingface/transformers/pull/27124
| 1,966,698,748 |
PR_kwDOCUB6oc5eCLs4
| 27,124 |
[`core`/ `GC` / `tests`] Stronger GC tests
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"On the commit shared above, this new test led to the same failures:\r\n\r\n```bash\r\nFAILED tests/models/autoformer/test_modeling_autoformer.py::AutoformerModelTest::test_training_gradient_checkpointing - AssertionError: False is not true : model.encoder.value_embedding.value_projection.weight in AutoformerForPrediction has no gradient!\r\nFAILED tests/models/beit/test_modeling_beit.py::BeitModelTest::test_training_gradient_checkpointing - RuntimeError: GET was unable to find an engine to execute this computation\r\nFAILED tests/models/big_bird/test_modeling_big_bird.py::BigBirdModelTest::test_training_gradient_checkpointing - AssertionError: False is not true : bert.pooler.weight in BigBirdForMaskedLM has no gradient!\r\nFAILED tests/models/canine/test_modeling_canine.py::CanineModelTest::test_training_gradient_checkpointing - AssertionError: False is not true : canine.projection.conv.weight in CanineForMultipleChoice has no gradient!\r\nFAILED tests/models/clipseg/test_modeling_clipseg.py::CLIPSegModelTest::test_training_gradient_checkpointing - AssertionError: False is not true : clip.logit_scale in CLIPSegForImageSegmentation has no gradient!\r\nFAILED tests/models/data2vec/test_modeling_data2vec_vision.py::Data2VecVisionModelTest::test_training_gradient_checkpointing - RuntimeError: GET was unable to find an engine to execute this computation\r\nFAILED tests/models/dinov2/test_modeling_dinov2.py::Dinov2ModelTest::test_training_gradient_checkpointing - AssertionError: False is not true : dinov2.embeddings.mask_token in Dinov2ForImageClassification has no gradient!\r\nFAILED tests/models/dpt/test_modeling_dpt_hybrid.py::DPTModelTest::test_training_gradient_checkpointing - RuntimeError: GET was unable to find an engine to execute this computation\r\nFAILED tests/models/flava/test_modeling_flava.py::FlavaForPreTrainingTest::test_training_gradient_checkpointing - AssertionError: False is not true : flava.text_model.pooler.dense.weight in FlavaForPreTraining has no gradient!\r\nFAILED tests/models/fnet/test_modeling_fnet.py::FNetModelTest::test_training_gradient_checkpointing - AssertionError: False is not true : fnet.pooler.dense.weight in FNetForMaskedLM has no gradient!\r\nFAILED tests/models/gpt2/test_modeling_gpt2.py::GPT2ModelTest::test_training_gradient_checkpointing - AssertionError: False is not true : multiple_choice_head.summary.weight in GPT2DoubleHeadsModel has no gradient!\r\nFAILED tests/models/graphormer/test_modeling_graphormer.py::GraphormerModelTest::test_training_gradient_checkpointing - AssertionError: False is not true : encoder.graph_encoder.graph_attn_bias.edge_encoder.weight in GraphormerForGraphClassification has no gradient!\r\nFAILED tests/models/imagegpt/test_modeling_imagegpt.py::ImageGPTModelTest::test_training_gradient_checkpointing - AssertionError: False is not true : transformer.h.0.ln_1.weight in ImageGPTForCausalImageModeling has no gradient!\r\nFAILED tests/models/informer/test_modeling_informer.py::InformerModelTest::test_training_gradient_checkpointing - AssertionError: False is not true : model.encoder.embed_positions.weight in InformerForPrediction has no gradient!\r\nFAILED tests/models/layoutlm/test_modeling_layoutlm.py::LayoutLMModelTest::test_training_gradient_checkpointing - AssertionError: False is not true : layoutlm.pooler.dense.weight in LayoutLMForMaskedLM has no gradient!\r\nFAILED tests/models/lilt/test_modeling_lilt.py::LiltModelTest::test_training_gradient_checkpointing - AssertionError: False is not true : 
lilt.embeddings.word_embeddings.weight in LiltForSequenceClassification has no gradient!\r\nFAILED tests/models/luke/test_modeling_luke.py::LukeModelTest::test_training_gradient_checkpointing - AssertionError: False is not true : luke.pooler.dense.weight in LukeForMaskedLM has no gradient!\r\nFAILED tests/models/marian/test_modeling_marian.py::MarianModelTest::test_training_gradient_checkpointing - AssertionError: False is not true : model.encoder.embed_positions.weight in MarianMTModel has no gradient!\r\nFAILED tests/models/mra/test_modeling_mra.py::MraModelTest::test_training_gradient_checkpointing - ValueError: sequence length must be divisible by the block_size.\r\nFAILED tests/models/pegasus/test_modeling_pegasus.py::PegasusModelTest::test_training_gradient_checkpointing - AssertionError: False is not true : model.encoder.embed_positions.weight in PegasusForConditionalGeneration has no gradient!\r\nFAILED tests/models/roformer/test_modeling_roformer.py::RoFormerModelTest::test_training_gradient_checkpointing - AssertionError: False is not true : roformer.encoder.embed_positions.weight in RoFormerForMaskedLM has no gradient!\r\nFAILED tests/models/seamless_m4t/test_modeling_seamless_m4t.py::SeamlessM4TModelWithSpeechInputTest::test_training_gradient_checkpointing - AssertionError: False is not true : speech_encoder.encoder.layers.0.ffn1_layer_norm.weight in SeamlessM4TForSpeechToSpeech has no gradient!\r\nFAILED tests/models/seamless_m4t/test_modeling_seamless_m4t.py::SeamlessM4TModelWithTextInputTest::test_training_gradient_checkpointing - AssertionError: False is not true : t2u_model.model.encoder.layers.0.self_attn.k_proj.weight in SeamlessM4TForTextToSpeech has no gradient!\r\nFAILED tests/models/time_series_transformer/test_modeling_time_series_transformer.py::TimeSeriesTransformerModelTest::test_training_gradient_checkpointing - AssertionError: False is not true : model.encoder.embed_positions.weight in TimeSeriesTransformerForPrediction has no gradient!\r\nFAILED tests/models/umt5/test_modeling_umt5.py::UMT5ModelTest::test_training_gradient_checkpointing - AssertionError: False is not true : encoder.block.0.layer.0.SelfAttention.q.weight in UMT5ForConditionalGeneration has no gradient!\r\nFAILED tests/models/visual_bert/test_modeling_visual_bert.py::VisualBertModelTest::test_training_gradient_checkpointing - AssertionError: False is not true : visual_bert.pooler.dense.weight in VisualBertForRegionToPhraseAlignment has no gradient!\r\n=================================================================================================================================================== 26 failed, 242 passed, 19 skipped, 42543 deselected, 160 warnings in 51.80s =========================================================================\r\n```\r\n\r\nI propose to skip these tests for the architectures where the test fails, and file an issue on PyTorch with a simple reproducer.",
"_The documentation is not available anymore as the PR was closed or merged._",
"Done! I will try to come up with a simple reproducible snippet and file an issue on PyTorch, the failing CI seems unrelated to this PR as it also fails on main. This PR is ready for another round of review!",
"Thanks for your suggestions @ydshieh , @amyeroberts , this PR is ready for another round of review!",
"Wow, thank you for the update, I wasn't aware we have to change all files 😅 .\r\n\r\nFor `test_training_gradient_checkpointing_use_reentrant` and the other 2, maybe we can define\r\n\r\n`check_training_gradient_checkpointing(self, gradient_checkpointing_kwargs)`, and the 3 test methods just simply call that method with the desired argument value for `gradient_checkpointing_kwargs` - if that is the only difference between them. Otherwise, LGTM.",
"Sure @ydshieh , I just refactored the tests as suggested!"
] | 1,698 | 1,698 | 1,698 |
CONTRIBUTOR
| null |
# What does this PR do?
While working on the gradient checkpointing refactor I realized that GC was not strongly battle-tested on our models.
I propose a simple way to check that GC has correctly taken effect by simulating a training step and checking that all parameters that require grad end up with a non-None gradient.
The good news is that all the most-used models have a properly working implementation of gradient checkpointing, whereas for ~20 architectures this new test fails.
```bash
FAILED tests/models/autoformer/test_modeling_autoformer.py::AutoformerModelTest::test_training_gradient_checkpointing - AssertionError: False is not true : model.encoder.value_embedding.value_projection.weight has no gradient!
FAILED tests/models/beit/test_modeling_beit.py::BeitModelTest::test_training_gradient_checkpointing - RuntimeError: GET was unable to find an engine to execute this computation
FAILED tests/models/big_bird/test_modeling_big_bird.py::BigBirdModelTest::test_training_gradient_checkpointing - AssertionError: False is not true : bert.pooler.weight has no gradient!
FAILED tests/models/canine/test_modeling_canine.py::CanineModelTest::test_training_gradient_checkpointing - AssertionError: False is not true : canine.projection.conv.weight has no gradient!
FAILED tests/models/clipseg/test_modeling_clipseg.py::CLIPSegModelTest::test_training_gradient_checkpointing - AssertionError: False is not true : clip.logit_scale has no gradient!
FAILED tests/models/data2vec/test_modeling_data2vec_vision.py::Data2VecVisionModelTest::test_training_gradient_checkpointing - RuntimeError: GET was unable to find an engine to execute this computation
FAILED tests/models/dinov2/test_modeling_dinov2.py::Dinov2ModelTest::test_training_gradient_checkpointing - AssertionError: False is not true : dinov2.embeddings.mask_token has no gradient!
FAILED tests/models/dpt/test_modeling_dpt_hybrid.py::DPTModelTest::test_training_gradient_checkpointing - RuntimeError: GET was unable to find an engine to execute this computation
FAILED tests/models/flava/test_modeling_flava.py::FlavaForPreTrainingTest::test_training_gradient_checkpointing - AssertionError: False is not true : flava.text_model.pooler.dense.weight has no gradient!
FAILED tests/models/fnet/test_modeling_fnet.py::FNetModelTest::test_training_gradient_checkpointing - AssertionError: False is not true : fnet.pooler.dense.weight has no gradient!
FAILED tests/models/gpt2/test_modeling_gpt2.py::GPT2ModelTest::test_training_gradient_checkpointing - AssertionError: False is not true : multiple_choice_head.summary.weight has no gradient!
FAILED tests/models/imagegpt/test_modeling_imagegpt.py::ImageGPTModelTest::test_training_gradient_checkpointing - AssertionError: False is not true : transformer.h.0.ln_1.weight has no gradient!
FAILED tests/models/informer/test_modeling_informer.py::InformerModelTest::test_training_gradient_checkpointing - AssertionError: False is not true : model.encoder.embed_positions.weight has no gradient!
FAILED tests/models/layoutlm/test_modeling_layoutlm.py::LayoutLMModelTest::test_training_gradient_checkpointing - AssertionError: False is not true : layoutlm.pooler.dense.weight has no gradient!
FAILED tests/models/lilt/test_modeling_lilt.py::LiltModelTest::test_training_gradient_checkpointing - AssertionError: False is not true : lilt.embeddings.word_embeddings.weight has no gradient!
FAILED tests/models/luke/test_modeling_luke.py::LukeModelTest::test_training_gradient_checkpointing - AssertionError: False is not true : luke.pooler.dense.weight has no gradient!
FAILED tests/models/marian/test_modeling_marian.py::MarianModelTest::test_training_gradient_checkpointing - AssertionError: False is not true : model.encoder.embed_positions.weight has no gradient!
FAILED tests/models/mra/test_modeling_mra.py::MraModelTest::test_training_gradient_checkpointing - ValueError: sequence length must be divisible by the block_size.
FAILED tests/models/pegasus/test_modeling_pegasus.py::PegasusModelTest::test_training_gradient_checkpointing - AssertionError: False is not true : model.encoder.embed_positions.weight has no gradient!
FAILED tests/models/roformer/test_modeling_roformer.py::RoFormerModelTest::test_training_gradient_checkpointing - AssertionError: False is not true : roformer.encoder.embed_positions.weight has no gradient!
FAILED tests/models/seamless_m4t/test_modeling_seamless_m4t.py::SeamlessM4TModelWithSpeechInputTest::test_training_gradient_checkpointing - AssertionError: False is not true : speech_encoder.encoder.layers.1.ffn1_layer_norm.weight has no gradient!
FAILED tests/models/seamless_m4t/test_modeling_seamless_m4t.py::SeamlessM4TModelWithTextInputTest::test_training_gradient_checkpointing - AssertionError: False is not true : t2u_model.model.encoder.layers.0.self_attn.k_proj.weight has no gradient!
FAILED tests/models/time_series_transformer/test_modeling_time_series_transformer.py::TimeSeriesTransformerModelTest::test_training_gradient_checkpointing - AssertionError: False is not true : model.encoder.embed_positions.weight has no gradient!
FAILED tests/models/umt5/test_modeling_umt5.py::UMT5ModelTest::test_training_gradient_checkpointing - AssertionError: False is not true : encoder.block.0.layer.0.SelfAttention.q.weight has no gradient!
FAILED tests/models/visual_bert/test_modeling_visual_bert.py::VisualBertModelTest::test_training_gradient_checkpointing - AssertionError: False is not true : visual_bert.pooler.dense.weight has no gradient!
==================================================================================================================================================== 25 failed, 243 passed, 19 skipped, 42539 deselected, 66 warnings in 59.45s ===============================================
```
I will test this against commit 9286f0ac3939a7081773fc66480f651a7d6a8404 (the commit right before the first GC refactor commit) and report here. If we get the same number of failed tests, this might indicate a silent bug in PyTorch.
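Concretely, the added check boils down to something like the sketch below (an illustrative simplification, not the exact helper introduced by this PR; `model` and `inputs` would come from the common test suite, with `inputs` containing labels so the forward pass returns a loss):
```python
def check_gradient_checkpointing(model, inputs):
    # enable gradient checkpointing and simulate one training step
    model.gradient_checkpointing_enable()
    model.train()
    loss = model(**inputs).loss
    loss.backward()
    # every trainable parameter should have received a gradient
    for name, param in model.named_parameters():
        if param.requires_grad:
            assert param.grad is not None, f"{name} has no gradient!"
```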
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27124/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27124/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27124",
"html_url": "https://github.com/huggingface/transformers/pull/27124",
"diff_url": "https://github.com/huggingface/transformers/pull/27124.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27124.patch",
"merged_at": 1698692026000
}
|
https://api.github.com/repos/huggingface/transformers/issues/27123
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27123/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27123/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27123/events
|
https://github.com/huggingface/transformers/pull/27123
| 1,966,670,990 |
PR_kwDOCUB6oc5eCGOD
| 27,123 |
Update cookiecutter.json
|
{
"login": "parakh2204",
"id": 27987971,
"node_id": "MDQ6VXNlcjI3OTg3OTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/27987971?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/parakh2204",
"html_url": "https://github.com/parakh2204",
"followers_url": "https://api.github.com/users/parakh2204/followers",
"following_url": "https://api.github.com/users/parakh2204/following{/other_user}",
"gists_url": "https://api.github.com/users/parakh2204/gists{/gist_id}",
"starred_url": "https://api.github.com/users/parakh2204/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/parakh2204/subscriptions",
"organizations_url": "https://api.github.com/users/parakh2204/orgs",
"repos_url": "https://api.github.com/users/parakh2204/repos",
"events_url": "https://api.github.com/users/parakh2204/events{/privacy}",
"received_events_url": "https://api.github.com/users/parakh2204/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @parakh2204 - thanks for opening this PR! Could you provide some more information in the PR description detailing what issue this is addressing? ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,698 | 1,701 | 1,701 |
NONE
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27123/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27123/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27123",
"html_url": "https://github.com/huggingface/transformers/pull/27123",
"diff_url": "https://github.com/huggingface/transformers/pull/27123.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27123.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/27122
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27122/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27122/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27122/events
|
https://github.com/huggingface/transformers/pull/27122
| 1,966,667,769 |
PR_kwDOCUB6oc5eCFnO
| 27,122 |
translate training.md to chinese
|
{
"login": "jiaqiw09",
"id": 60021713,
"node_id": "MDQ6VXNlcjYwMDIxNzEz",
"avatar_url": "https://avatars.githubusercontent.com/u/60021713?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiaqiw09",
"html_url": "https://github.com/jiaqiw09",
"followers_url": "https://api.github.com/users/jiaqiw09/followers",
"following_url": "https://api.github.com/users/jiaqiw09/following{/other_user}",
"gists_url": "https://api.github.com/users/jiaqiw09/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiaqiw09/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiaqiw09/subscriptions",
"organizations_url": "https://api.github.com/users/jiaqiw09/orgs",
"repos_url": "https://api.github.com/users/jiaqiw09/repos",
"events_url": "https://api.github.com/users/jiaqiw09/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiaqiw09/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27122). All of your documentation changes will be reflected on that endpoint.",
"@stevhliu I have fixed the _tortree.yml.\r\n\r\nBest"
] | 1,698 | 1,698 | 1,698 |
CONTRIBUTOR
| null |
# What does this PR do?
Part of #26803
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? _not necessary_
## Who can review?
@stevhliu
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27122/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27122/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27122",
"html_url": "https://github.com/huggingface/transformers/pull/27122",
"diff_url": "https://github.com/huggingface/transformers/pull/27122.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27122.patch",
"merged_at": 1698767858000
}
|
https://api.github.com/repos/huggingface/transformers/issues/27121
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27121/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27121/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27121/events
|
https://github.com/huggingface/transformers/issues/27121
| 1,966,539,587 |
I_kwDOCUB6oc51NwND
| 27,121 |
Training CodeLLaMa-7b with FSDP causes loss 0 error
|
{
"login": "TomasAndersonFang",
"id": 38727343,
"node_id": "MDQ6VXNlcjM4NzI3MzQz",
"avatar_url": "https://avatars.githubusercontent.com/u/38727343?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TomasAndersonFang",
"html_url": "https://github.com/TomasAndersonFang",
"followers_url": "https://api.github.com/users/TomasAndersonFang/followers",
"following_url": "https://api.github.com/users/TomasAndersonFang/following{/other_user}",
"gists_url": "https://api.github.com/users/TomasAndersonFang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TomasAndersonFang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TomasAndersonFang/subscriptions",
"organizations_url": "https://api.github.com/users/TomasAndersonFang/orgs",
"repos_url": "https://api.github.com/users/TomasAndersonFang/repos",
"events_url": "https://api.github.com/users/TomasAndersonFang/events{/privacy}",
"received_events_url": "https://api.github.com/users/TomasAndersonFang/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @pacman100 @muellerz ",
"@amyeroberts I'm sorry I actually solved this problem. This problem is caused by fp16 and a large learning rate. When fine-tuning LLaMA with Lora, it's ok to use them. But with full-parameter fine-tuning, it's necessary to use bf16 and a smaller learning rate (I use 5e-6, although 5e-5 is also ok but it's sometimes unstable).",
"@TomasAndersonFang thanks for replying and detailing what the issue was! "
] | 1,698 | 1,699 | 1,699 |
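The fix described in the last comment above (bf16 plus a smaller learning rate for full-parameter fine-tuning) corresponds roughly to the following `TrainingArguments`; the values are illustrative, not the reporter's exact command:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="codellama-7b-sft",  # hypothetical output directory
    bf16=True,                      # bf16 instead of fp16, which produced the zero loss
    learning_rate=5e-6,             # much smaller than typical LoRA learning rates
    per_device_train_batch_size=1,
    num_train_epochs=1,
)
```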
NONE
| null |
### System Info
```
- `transformers` version: 4.34.0
- Platform: Linux-4.18.0-477.21.1.el8_8.x86_64-x86_64-with-glibc2.28
- Python version: 3.10.12
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: 0.23.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
My script:
```python
# coding=utf-8
# Implements parameter-efficient or full parameters supervised fine-tuning for LLaMa model.
# This code is inspired by
# https://github.com/tatsu-lab/stanford_alpaca/blob/main/train.py and https://www.mlexpert.io/machine-learning/tutorials/alpaca-fine-tuning
import transformers
from transformers import (
AutoTokenizer,
AutoModelForCausalLM,
DataCollatorForSeq2Seq,
Trainer,
Seq2SeqTrainer,
HfArgumentParser,
Seq2SeqTrainingArguments,
BitsAndBytesConfig,
)
from peft import (
LoraConfig,
get_peft_model,
get_peft_model_state_dict,
prepare_model_for_int8_training,
prepare_model_for_kbit_training,
set_peft_model_state_dict,
)
import torch
import os
import evaluate
import functools
from datasets import load_dataset
# import bitsandbytes as bnb
import logging
import json
import copy
from typing import Dict, Optional, Sequence
from dataclasses import dataclass, field
# Lora settings
LORA_R = 8
LORA_ALPHA = 16
LORA_DROPOUT= 0.05
LORA_TARGET_MODULES = [
"q_proj",
"v_proj",
]
@dataclass
class ModelArguments:
model_name_or_path: Optional[str] = field(default="elinas/llama-7b-hf-transformers-4.29")
@dataclass
class DataArguments:
data_path: str = field(default=None, metadata={"help": "Path to the training data."})
train_file: str = field(default=None, metadata={"help": "Path to the evaluation data."})
eval_file: str = field(default=None, metadata={"help": "Path to the evaluation data."})
cache_path: str = field(default=None, metadata={"help": "Path to the cache directory."})
num_proc: int = field(default=4, metadata={"help": "Number of processes to use for data preprocessing."})
@dataclass
class TrainingArguments(transformers.TrainingArguments):
# cache_dir: Optional[str] = field(default=None)
optim: str = field(default="adamw_torch")
# adam_beta1: float = field(default=0.9)
# adam_beta2: float = field(default=0.95)
model_max_length: int = field(
default=1024,
metadata={"help": "Maximum sequence length. Sequences will be right padded (and possibly truncated)."},
)
is_lora: bool = field(default=True, metadata={"help": "Whether to use LORA."})
def tokenize(text, tokenizer, max_seq_len=1024, add_eos_token=True):
result = tokenizer(
text,
truncation=False,
max_length=max_seq_len,
padding=False,
return_tensors=None,
)
    # If the tokenized length reaches or exceeds max_seq_len, return None
if len(result["input_ids"]) >= max_seq_len:
return None
if (
result["input_ids"][-1] != tokenizer.eos_token_id
and len(result["input_ids"]) < max_seq_len
and add_eos_token
):
result["input_ids"].append(tokenizer.eos_token_id)
result["attention_mask"].append(1)
# if add_eos_token and len(result["input_ids"]) >= max_seq_len:
# result["input_ids"][max_seq_len - 1] = tokenizer.eos_token_id
# result["attention_mask"][max_seq_len - 1] = 1
result["labels"] = result["input_ids"].copy()
return result
def main():
parser = HfArgumentParser((ModelArguments, DataArguments, TrainingArguments))
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
if training_args.is_lora:
model = AutoModelForCausalLM.from_pretrained(
model_args.model_name_or_path,
cache_dir=data_args.cache_path,
torch_dtype=torch.float16,
trust_remote_code=True,
load_in_8bit=True,
quantization_config=BitsAndBytesConfig(
load_in_8bit=True,
llm_int8_threshold=6.0
),
)
model = prepare_model_for_kbit_training(model)
config = LoraConfig(
r=LORA_R,
lora_alpha=LORA_ALPHA,
target_modules=LORA_TARGET_MODULES,
lora_dropout=LORA_DROPOUT,
bias="none",
task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
else:
model = AutoModelForCausalLM.from_pretrained(
model_args.model_name_or_path,
torch_dtype=torch.float16,
cache_dir=data_args.cache_path,
trust_remote_code=True,
)
model.config.use_cache = False
def print_trainable_parameters(model):
"""
Prints the number of trainable parameters in the model.
"""
trainable_params = 0
all_param = 0
for _, param in model.named_parameters():
all_param += param.numel()
if param.requires_grad:
trainable_params += param.numel()
print(
f"trainable params: {trainable_params} || all params: {all_param} || trainable%: {100 * trainable_params / all_param}"
)
if training_args.is_lora:
print_trainable_parameters(model)
tokenizer = AutoTokenizer.from_pretrained(
model_args.model_name_or_path,
cache_dir=data_args.cache_path,
model_max_length=training_args.model_max_length,
padding_side="left",
trust_remote_code=True,
use_fast=True,
)
tokenizer.pad_token = tokenizer.unk_token
# Load dataset
def generate_and_tokenize_prompt(sample):
input_text = sample["input"]
target_text = sample["output"] + tokenizer.eos_token
full_text = input_text + target_text
tokenized_full_text = tokenize(full_text, tokenizer, max_seq_len=training_args.model_max_length)
if tokenized_full_text is None:
# Return a null sample if the tokenized length exceeds the max_seq_len
return {"input_ids": [], "attention_mask": [], "labels": []}
tokenized_input_text = tokenize(input_text, tokenizer, max_seq_len=training_args.model_max_length)
        input_len = len(tokenized_input_text["input_ids"]) # This is a bug of LlamaTokenizer: it does not add the eos token here
tokenized_full_text["labels"] = [-100] * input_len + tokenized_full_text["labels"][input_len:]
return tokenized_full_text
data_files = {}
if data_args.train_file is not None:
data_files["train"] = data_args.train_file
if data_args.eval_file is not None:
data_files["eval"] = data_args.eval_file
dataset = load_dataset(data_args.data_path, data_files=data_files)
train_dataset = dataset["train"]
eval_dataset = dataset["eval"]
def print_dataset_length(dataset, name):
print(f"Number of samples in {name} dataset after filtering: {len(dataset)}")
train_dataset = train_dataset.map(generate_and_tokenize_prompt, num_proc=data_args.num_proc)
eval_dataset = eval_dataset.map(generate_and_tokenize_prompt, num_proc=data_args.num_proc)
# Filter null samples
train_dataset = train_dataset.filter(lambda sample: len(sample["input_ids"]) > 0)
eval_dataset = eval_dataset.filter(lambda sample: len(sample["input_ids"]) > 0)
print_dataset_length(train_dataset, "train")
print_dataset_length(eval_dataset, "eval")
data_collator = DataCollatorForSeq2Seq(tokenizer, pad_to_multiple_of=8, return_tensors="pt", padding=True)
# Evaluation metrics
def compute_metrics(eval_preds, tokenizer):
metric = evaluate.load('exact_match')
preds, labels = eval_preds
# In case the model returns more than the prediction logits
if isinstance(preds, tuple):
preds = preds[0]
decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True, clean_up_tokenization_spaces=False)
# Replace -100s in the labels as we can't decode them
labels[labels == -100] = tokenizer.pad_token_id
decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True, clean_up_tokenization_spaces=False)
# Some simple post-processing
decoded_preds = [pred.strip() for pred in decoded_preds]
decoded_labels = [label.strip() for label in decoded_labels]
result = metric.compute(predictions=decoded_preds, references=decoded_labels)
return {'exact_match': result['exact_match']}
compute_metrics_fn = functools.partial(compute_metrics, tokenizer=tokenizer)
# Training
trainer = Trainer(
model=model,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
args=training_args,
data_collator=data_collator,
compute_metrics=compute_metrics_fn,
)
trainer.train()
trainer.save_state()
trainer.save_model(output_dir=training_args.output_dir)
tokenizer.save_pretrained(save_directory=training_args.output_dir)
if __name__ == "__main__":
main()
```
Commands used to launch the script:
```bash
accelerate launch --config_file "/proj/berzelius-2023-175/users/x_senfa/apr_ft/configs/fsdp_config.yaml" /proj/berzelius-2023-175/users/x_senfa/apr_ft/llama2_sft.py \
--model_name_or_path \
--data_path \
--output_dir \
--train_file train_data.jsonl \
--eval_file test_data.jsonl \
--is_lora False \
--model_max_length 1024 \
--cache_path \
--do_train \
--do_eval False \
--fp16 True \
--bf16 False \
--num_train_epochs 2 \
--per_device_train_batch_size 1 \
--per_device_eval_batch_size 1 \
--gradient_accumulation_steps 2 \
--evaluation_strategy "no" \
--eval_steps 10 \
--save_steps 1200 \
--learning_rate 5e-4 \
--lr_scheduler_type "cosine" \
--logging_steps 10 \
```
Accelerate config
```
compute_environment: LOCAL_MACHINE
debug: false
distributed_type: FSDP
downcast_bf16: 'no'
fsdp_config:
fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
fsdp_backward_prefetch_policy: BACKWARD_PRE
fsdp_forward_prefetch: true
fsdp_offload_params: false
fsdp_sharding_strategy: 1
fsdp_state_dict_type: SHARDED_STATE_DICT
fsdp_sync_module_states: true
fsdp_use_orig_params: true
machine_rank: 0
main_training_function: main
mixed_precision: 'bf16'
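# Note: this sets bf16 mixed precision at the accelerate level, while the launch command
# above passes --fp16 True --bf16 False; assuming both are used together as shown,
# the two settings disagree and may be worth double-checking.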
num_machines: 1
num_processes: 4
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
Log
```
0%| | 0/11084 [00:00<?, ?it/s]You're using a CodeLlamaTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.
You're using a CodeLlamaTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.
You're using a CodeLlamaTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.
You're using a CodeLlamaTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.
0%| | 1/11084 [00:01<5:18:57, 1.73s/it]
0%| | 2/11084 [00:03<4:29:55, 1.46s/it]
0%| | 3/11084 [00:04<5:01:43, 1.63s/it]
0%| | 4/11084 [00:07<5:40:13, 1.84s/it]
0%| | 5/11084 [00:09<5:55:33, 1.93s/it]
0%| | 6/11084 [00:11<6:24:14, 2.08s/it]
0%| | 7/11084 [00:14<6:56:29, 2.26s/it]
0%| | 8/11084 [00:17<7:43:48, 2.51s/it]
0%| | 9/11084 [00:20<8:13:53, 2.68s/it]
0%| | 10/11084 [00:23<8:49:02, 2.87s/it]
{'loss': 3.8593, 'learning_rate': 0.0004999999899580808, 'epoch': 0.0}
0%| | 10/11084 [00:23<8:49:02, 2.87s/it]
0%| | 11/11084 [00:26<8:42:57, 2.83s/it]
0%| | 12/11084 [00:27<7:31:00, 2.44s/it]
0%| | 13/11084 [00:30<7:26:25, 2.42s/it]
0%| | 14/11084 [00:32<7:08:13, 2.32s/it]
0%| | 15/11084 [00:34<6:53:57, 2.24s/it]
0%| | 16/11084 [00:36<6:44:50, 2.19s/it]
0%| | 17/11084 [00:38<6:21:09, 2.07s/it]
0%| | 18/11084 [00:40<6:25:54, 2.09s/it]
0%| | 19/11084 [00:43<7:26:38, 2.42s/it]
0%| | 20/11084 [00:45<6:57:20, 2.26s/it]
{'loss': 0.0, 'learning_rate': 0.0004999999899580808, 'epoch': 0.0}
0%| | 20/11084 [00:45<6:57:20, 2.26s/it]
0%| | 21/11084 [00:48<7:25:37, 2.42s/it]
0%| | 22/11084 [00:50<7:26:41, 2.42s/it]
0%| | 23/11084 [00:53<7:55:51, 2.58s/it]
0%| | 24/11084 [00:56<8:18:13, 2.70s/it]
0%| | 25/11084 [00:58<8:03:54, 2.63s/it]
0%| | 26/11084 [01:01<7:34:18, 2.47s/it]
0%| | 27/11084 [01:03<7:44:08, 2.52s/it]
0%| | 28/11084 [01:06<7:58:40, 2.60s/it]
0%| | 29/11084 [01:09<8:11:45, 2.67s/it]
0%| | 30/11084 [01:11<7:26:06, 2.42s/it]
{'loss': 0.0, 'learning_rate': 0.0004999999899580808, 'epoch': 0.01}
0%| | 30/11084 [01:11<7:26:06, 2.42s/it]
0%| | 31/11084 [01:13<7:27:43, 2.43s/it]
0%| | 32/11084 [01:16<7:56:56, 2.59s/it]
0%| | 33/11084 [01:19<7:50:14, 2.55s/it]
0%| | 34/11084 [01:21<7:46:59, 2.54s/it]
0%| | 35/11084 [01:26<9:37:39, 3.14s/it]
0%| | 36/11084 [01:27<8:27:02, 2.75s/it]
0%| | 37/11084 [01:31<8:48:13, 2.87s/it]
0%| | 38/11084 [01:33<8:05:37, 2.64s/it]
0%| | 39/11084 [01:35<7:53:22, 2.57s/it]
0%| | 40/11084 [01:39<8:54:45, 2.91s/it]
{'loss': 0.0, 'learning_rate': 0.0004999999899580808, 'epoch': 0.01}
0%| | 40/11084 [01:39<8:54:45, 2.91s/it]
0%| | 41/11084 [01:41<8:38:49, 2.82s/it]
0%| | 42/11084 [01:43<7:15:42, 2.37s/it]
0%| | 43/11084 [01:46<7:53:19, 2.57s/it]
0%| | 44/11084 [01:49<8:11:36, 2.67s/it]
0%| | 45/11084 [01:52<8:23:12, 2.74s/it]
0%| | 46/11084 [01:54<8:18:13, 2.71s/it]
0%| | 47/11084 [01:56<7:49:05, 2.55s/it]
0%| | 48/11084 [01:58<7:11:24, 2.35s/it]
0%| | 49/11084 [02:01<7:18:50, 2.39s/it]
0%| | 50/11084 [02:03<7:17:48, 2.38s/it]
{'loss': 0.0, 'learning_rate': 0.0004999999899580808, 'epoch': 0.01}
0%| | 50/11084 [02:03<7:17:48, 2.38s/it]
0%| | 51/11084 [02:06<7:48:02, 2.55s/it]
0%| | 52/11084 [02:08<7:40:46, 2.51s/it]
0%| | 53/11084 [02:11<7:34:10, 2.47s/it]
0%| | 54/11084 [02:13<7:42:31, 2.52s/it]
0%| | 55/11084 [02:16<8:00:33, 2.61s/it]
1%| | 56/11084 [02:20<8:42:51, 2.84s/it]
1%| | 57/11084 [02:23<9:09:47, 2.99s/it]
1%| | 58/11084 [02:26<9:00:00, 2.94s/it]
1%| | 59/11084 [02:28<8:21:44, 2.73s/it]
1%| | 60/11084 [02:30<8:03:29, 2.63s/it]
{'loss': 0.0, 'learning_rate': 0.0004999999899580808, 'epoch': 0.01}
1%| | 60/11084 [02:30<8:03:29, 2.63s/it]
1%| | 61/11084 [02:33<8:15:02, 2.69s/it]
1%| | 62/11084 [02:36<8:11:34, 2.68s/it]
1%| | 63/11084 [02:38<8:04:32, 2.64s/it]
1%| | 64/11084 [02:40<6:46:28, 2.21s/it]
1%| | 65/11084 [02:43<7:35:48, 2.48s/it]
1%| | 66/11084 [02:45<7:01:26, 2.30s/it]
1%| | 67/11084 [02:47<7:11:32, 2.35s/it]
1%| | 68/11084 [02:50<7:23:52, 2.42s/it]
1%| | 69/11084 [02:54<8:48:35, 2.88s/it]
1%| | 70/11084 [02:57<9:28:29, 3.10s/it]
{'loss': 0.0, 'learning_rate': 0.0004999999899580808, 'epoch': 0.01}
```
### Expected behavior
I don't understand why the loss converges to 0 so quickly, so I suspect something is wrong.
Additional info:
- GPU info: 4x A100 (40GB)
- I used this script to fine-tune CodeLlama with LoRA and everything worked; I got the expected results.
- I want to fine-tune CodeLlama with FSDP and bf16, but I ran into OOM even with the batch size set to 1.
My questions:
- How can I solve this problem? (See the configuration sketch below.)
- Does bf16 require more memory than fp16?
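For reference, the resolution reported in the comments of this issue was to switch full-parameter fine-tuning from fp16 to bf16 and to lower the learning rate. Below is a minimal sketch of the corresponding `TrainingArguments` change; the 5e-6 value comes from that comment, the output path is hypothetical, and all other flags are assumed to stay as in the launch command above. (bf16 and fp16 are both 16-bit formats, so per-parameter memory is the same; any OOM is more likely dominated by optimizer states and activations.)
```python
# Minimal sketch, not the full script: only the arguments that differ from the
# run above are shown; everything else is assumed unchanged.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./codellama-full-ft",  # hypothetical path
    bf16=True,                         # bf16 instead of fp16 for full-parameter fine-tuning
    fp16=False,
    learning_rate=5e-6,                # much smaller than the 5e-4 used above
    per_device_train_batch_size=1,
    gradient_accumulation_steps=2,
    num_train_epochs=2,
    lr_scheduler_type="cosine",
)
```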
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27121/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27121/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/27120
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27120/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27120/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27120/events
|
https://github.com/huggingface/transformers/pull/27120
| 1,966,482,477 |
PR_kwDOCUB6oc5eBhbq
| 27,120 |
device agnostic fsdp testing
|
{
"login": "statelesshz",
"id": 28150734,
"node_id": "MDQ6VXNlcjI4MTUwNzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/28150734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/statelesshz",
"html_url": "https://github.com/statelesshz",
"followers_url": "https://api.github.com/users/statelesshz/followers",
"following_url": "https://api.github.com/users/statelesshz/following{/other_user}",
"gists_url": "https://api.github.com/users/statelesshz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/statelesshz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/statelesshz/subscriptions",
"organizations_url": "https://api.github.com/users/statelesshz/orgs",
"repos_url": "https://api.github.com/users/statelesshz/repos",
"events_url": "https://api.github.com/users/statelesshz/events{/privacy}",
"received_events_url": "https://api.github.com/users/statelesshz/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @ydshieh ",
"@statelesshz How these tests happen for multiple npu device? They pass or fail?",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27120). All of your documentation changes will be reflected on that endpoint.",
"@ydshieh Only one test case failed due to missing communication operator,and I will upload the test log tomorrow :-)",
"### System info\r\n```\r\n\r\n(hf_test) [root@localhost transformers]# transformers-cli env\r\nFail to import hypothesis in common_utils, tests are not derandomized\r\n\r\nCopy-and-paste the text below in your GitHub issue and FILL OUT the two last points.\r\n\r\n- `transformers` version: 4.35.0.dev0\r\n- Platform: Linux-4.19.90-vhulk2211.3.0.h1543.eulerosv2r10.aarch64-aarch64-with-glibc2.26\r\n- Python version: 3.8.18\r\n- Huggingface_hub version: 0.17.3\r\n- Safetensors version: 0.4.0\r\n- Accelerate version: 0.24.0\r\n- Accelerate config: \tnot found\r\n- PyTorch version (GPU?): 2.1.0 (False)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: <fill in>\r\n- Using distributed or parallel set-up in script?: <fill in>\r\n\r\n(hf_test) [root@localhost transformers]# accelerate env\r\nFail to import hypothesis in common_utils, tests are not derandomized\r\n\r\nCopy-and-paste the text below in your GitHub issue\r\n\r\n- `Accelerate` version: 0.24.0\r\n- Platform: Linux-4.19.90-vhulk2211.3.0.h1543.eulerosv2r10.aarch64-aarch64-with-glibc2.26\r\n- Python version: 3.8.18\r\n- Numpy version: 1.24.4\r\n- PyTorch version (GPU?): 2.1.0 (False)\r\n- PyTorch XPU available: False\r\n- PyTorch NPU available: True\r\n- System RAM: 755.10 GB\r\n- `Accelerate` default config:\r\n\tNot found\r\n```\r\n\r\n\r\n### test result\r\n`spec.py`\r\n```python\r\nimport torch\r\nimport torch_npu\r\n# !! Further additional imports can be added here !!\r\n# Specify the device name (eg. 'cuda', 'cpu', 'npu')\r\nDEVICE_NAME = 'npu:0'\r\n# Specify device-specific backends to dispatch to.\r\n# If not specified, will fallback to 'default' in 'testing_utils.py`\r\nMANUAL_SEED_FN = torch.npu.manual_seed_all\r\nEMPTY_CACHE_FN = torch.npu.empty_cache\r\nDEVICE_COUNT_FN = torch.npu.device_count\r\n\r\n```\r\ndo the following instructions:\r\n```\r\nRUN_SLOW=1 TRANSFORMERS_TEST_BACKEND=\"torch_npu\" TRANSFORMERS_TEST_DEVICE=\"npu:0\" TRANSFORMERS_TEST_DEVICE_SPEC=\"spec.py\" python -m pytest -v tests/fsdp/\r\n```\r\n\r\n\r\nThe output is as follows\r\n```\r\n============================================================================================================= test session starts ==============================================================================================================\r\nplatform linux -- Python 3.8.18, pytest-7.4.3, pluggy-1.3.0 -- /data/anaconda/envs/hf_test/bin/python\r\ncachedir: .pytest_cache\r\nrootdir: /data/hf_test/transformers\r\nconfigfile: setup.cfg\r\ncollected 12 items \r\n\r\ntests/fsdp/test_fsdp.py::TrainerIntegrationFSDP::test_basic_run_full_shard_bf16 PASSED [ 8%]\r\ntests/fsdp/test_fsdp.py::TrainerIntegrationFSDP::test_basic_run_full_shard_fp16 PASSED [ 16%]\r\ntests/fsdp/test_fsdp.py::TrainerIntegrationFSDP::test_basic_run_shard_grad_op_bf16 PASSED [ 25%]\r\ntests/fsdp/test_fsdp.py::TrainerIntegrationFSDP::test_basic_run_shard_grad_op_fp16 PASSED [ 33%]\r\ntests/fsdp/test_fsdp.py::TrainerIntegrationFSDP::test_basic_run_with_cpu_offload_0_fp16 PASSED [ 41%]\r\ntests/fsdp/test_fsdp.py::TrainerIntegrationFSDP::test_basic_run_with_cpu_offload_1_bf16 PASSED [ 50%]\r\ntests/fsdp/test_fsdp.py::TrainerIntegrationFSDP::test_fsdp_config_full_shard_bf16 PASSED [ 58%]\r\ntests/fsdp/test_fsdp.py::TrainerIntegrationFSDP::test_fsdp_config_full_shard_fp16 PASSED [ 
66%]\r\ntests/fsdp/test_fsdp.py::TrainerIntegrationFSDP::test_fsdp_config_shard_grad_op_bf16 PASSED [ 75%]\r\ntests/fsdp/test_fsdp.py::TrainerIntegrationFSDP::test_fsdp_config_shard_grad_op_fp16 PASSED [ 83%]\r\ntests/fsdp/test_fsdp.py::TrainerIntegrationFSDP::test_training_and_can_resume_normally_FULL_STATE_DICT PASSED [ 91%]\r\ntests/fsdp/test_fsdp.py::TrainerIntegrationFSDP::test_training_and_can_resume_normally_SHARDED_STATE_DICT FAILED [100%]\r\n\r\n=================================================================================================================== FAILURES ===================================================================================================================\r\n_______________________________________________________________________________ TrainerIntegrationFSDP.test_training_and_can_resume_normally_SHARDED_STATE_DICT ________________________________________________________________________________\r\n\r\na = (<test_fsdp.TrainerIntegrationFSDP testMethod=test_training_and_can_resume_normally_SHARDED_STATE_DICT>,), kw = {}\r\n\r\n @wraps(func)\r\n def standalone_func(*a, **kw):\r\n> return func(*(a + p.args), **p.kwargs, **kw)\r\n\r\n/data/anaconda/envs/hf_test/lib/python3.8/site-packages/parameterized/parameterized.py:620: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/fsdp/test_fsdp.py:209: in test_training_and_can_resume_normally\r\n logs = self.run_cmd_and_get_logs(use_accelerate, sharding_strategy, launcher, script, args, output_dir)\r\ntests/fsdp/test_fsdp.py:239: in run_cmd_and_get_logs\r\n execute_subprocess_async(cmd, env=self.get_env())\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\ncmd = ['accelerate', 'launch', '--num_processes', '2', '--main_process_port', '10999', ...]\r\nenv = {'ASCEND_AICPU_PATH': '/data/hf_test/ascend-toolkit/latest', 'ASCEND_HOME_PATH': '/data/hf_test/ascend-toolkit/latest'...PP_PATH': '/data/hf_test/ascend-toolkit/latest/opp', 'ASCEND_TOOLKIT_HOME': '/data/hf_test/ascend-toolkit/latest', ...}\r\nstdin = None, timeout = 180, quiet = False, echo = True\r\n\r\n def execute_subprocess_async(cmd, env=None, stdin=None, timeout=180, quiet=False, echo=True) -> _RunOutput:\r\n loop = asyncio.get_event_loop()\r\n result = loop.run_until_complete(\r\n _stream_subprocess(cmd, env=env, stdin=stdin, timeout=timeout, quiet=quiet, echo=echo)\r\n )\r\n \r\n cmd_str = \" \".join(cmd)\r\n if result.returncode > 0:\r\n stderr = \"\\n\".join(result.stderr)\r\n> raise RuntimeError(\r\n f\"'{cmd_str}' failed with returncode {result.returncode}\\n\\n\"\r\n f\"The combined stderr from workers follows:\\n{stderr}\"\r\n )\r\nE RuntimeError: 'accelerate launch --num_processes 2 --main_process_port 10999 --use_fsdp --fsdp_auto_wrap_policy TRANSFORMER_BASED_WRAP --fsdp_state_dict_type SHARDED_STATE_DICT --fsdp_transformer_layer_cls_to_wrap BertLayer --fsdp_sharding_strategy 1 /data/hf_test/transformers/examples/pytorch/text-classification/run_glue.py --model_name_or_path /data/hf_test/bert-base-cased --task_name mrpc --output_dir ./xxx --overwrite_output_dir --do_train --max_seq_length 128 --per_device_train_batch_size 16 --learning_rate 5e-5 
--num_train_epochs 2 --lr_scheduler_type cosine --logging_steps 25 --save_strategy epoch --do_eval --evaluation_strategy epoch --report_to none' failed with returncode 1\r\nE \r\nE The combined stderr from workers follows:\r\nE The following values were not passed to `accelerate launch` and had defaults used instead:\r\nE \t\tMore than one GPU was found, enabling multi-GPU training.\r\nE \t\tIf this was unintended please pass in `--num_processes=1`.\r\nE \t`--num_machines` was set to a value of `1`\r\nE \t`--mixed_precision` was set to a value of `'no'`\r\nE \t`--dynamo_backend` was set to a value of `'no'`\r\nE To avoid this warning pass in values for each of the problematic parameters or run `accelerate config`.\r\nE Using the latest cached version of the module from /root/.cache/huggingface/modules/datasets_modules/datasets/glue/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad (last modified on Thu Oct 26 17:35:09 2023) since it couldn't be found locally at glue., or remotely on the Hugging Face Hub.\r\nE Loading Dataset Infos from /root/.cache/huggingface/modules/datasets_modules/datasets/glue/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad\r\nE Overwrite dataset info from restored data version if exists.\r\nE Loading Dataset info from /root/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad\r\nE Found cached dataset glue (/root/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad)\r\nE Loading Dataset info from /root/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad\r\nE [INFO|configuration_utils.py:714] 2023-10-31 09:38:07,303 >> loading configuration file /data/hf_test/bert-base-cased/config.json\r\nE [INFO|configuration_utils.py:776] 2023-10-31 09:38:07,314 >> Model config BertConfig {\r\nE \"_name_or_path\": \"/data/hf_test/bert-base-cased\",\r\nE \"architectures\": [\r\nE \"BertForMaskedLM\"\r\nE ],\r\nE \"attention_probs_dropout_prob\": 0.1,\r\nE \"classifier_dropout\": null,\r\nE \"finetuning_task\": \"mrpc\",\r\nE \"gradient_checkpointing\": false,\r\nE \"hidden_act\": \"gelu\",\r\nE \"hidden_dropout_prob\": 0.1,\r\nE \"hidden_size\": 768,\r\nE \"initializer_range\": 0.02,\r\nE \"intermediate_size\": 3072,\r\nE \"layer_norm_eps\": 1e-12,\r\nE \"max_position_embeddings\": 512,\r\nE \"model_type\": \"bert\",\r\nE \"num_attention_heads\": 12,\r\nE \"num_hidden_layers\": 12,\r\nE \"pad_token_id\": 0,\r\nE \"position_embedding_type\": \"absolute\",\r\nE \"transformers_version\": \"4.35.0.dev0\",\r\nE \"type_vocab_size\": 2,\r\nE \"use_cache\": true,\r\nE \"vocab_size\": 28996\r\nE }\r\nE \r\nE [INFO|configuration_utils.py:714] 2023-10-31 09:38:07,314 >> loading configuration file /data/hf_test/bert-base-cased/config.json\r\nE [INFO|configuration_utils.py:776] 2023-10-31 09:38:07,316 >> Model config BertConfig {\r\nE \"_name_or_path\": \"/data/hf_test/bert-base-cased\",\r\nE \"architectures\": [\r\nE \"BertForMaskedLM\"\r\nE ],\r\nE \"attention_probs_dropout_prob\": 0.1,\r\nE \"classifier_dropout\": null,\r\nE \"gradient_checkpointing\": false,\r\nE \"hidden_act\": \"gelu\",\r\nE \"hidden_dropout_prob\": 0.1,\r\nE \"hidden_size\": 768,\r\nE \"initializer_range\": 0.02,\r\nE \"intermediate_size\": 3072,\r\nE \"layer_norm_eps\": 1e-12,\r\nE \"max_position_embeddings\": 512,\r\nE \"model_type\": \"bert\",\r\nE \"num_attention_heads\": 12,\r\nE \"num_hidden_layers\": 12,\r\nE 
\"pad_token_id\": 0,\r\nE \"position_embedding_type\": \"absolute\",\r\nE \"transformers_version\": \"4.35.0.dev0\",\r\nE \"type_vocab_size\": 2,\r\nE \"use_cache\": true,\r\nE \"vocab_size\": 28996\r\nE }\r\nE \r\nE [INFO|tokenization_utils_base.py:2019] 2023-10-31 09:38:07,316 >> loading file vocab.txt\r\nE [INFO|tokenization_utils_base.py:2019] 2023-10-31 09:38:07,316 >> loading file tokenizer.json\r\nE [INFO|tokenization_utils_base.py:2019] 2023-10-31 09:38:07,317 >> loading file added_tokens.json\r\nE [INFO|tokenization_utils_base.py:2019] 2023-10-31 09:38:07,317 >> loading file special_tokens_map.json\r\nE [INFO|tokenization_utils_base.py:2019] 2023-10-31 09:38:07,317 >> loading file tokenizer_config.json\r\nE [INFO|configuration_utils.py:714] 2023-10-31 09:38:07,317 >> loading configuration file /data/hf_test/bert-base-cased/config.json\r\nE [INFO|configuration_utils.py:776] 2023-10-31 09:38:07,318 >> Model config BertConfig {\r\nE \"_name_or_path\": \"/data/hf_test/bert-base-cased\",\r\nE \"architectures\": [\r\nE \"BertForMaskedLM\"\r\nE ],\r\nE \"attention_probs_dropout_prob\": 0.1,\r\nE \"classifier_dropout\": null,\r\nE \"gradient_checkpointing\": false,\r\nE \"hidden_act\": \"gelu\",\r\nE \"hidden_dropout_prob\": 0.1,\r\nE \"hidden_size\": 768,\r\nE \"initializer_range\": 0.02,\r\nE \"intermediate_size\": 3072,\r\nE \"layer_norm_eps\": 1e-12,\r\nE \"max_position_embeddings\": 512,\r\nE \"model_type\": \"bert\",\r\nE \"num_attention_heads\": 12,\r\nE \"num_hidden_layers\": 12,\r\nE \"pad_token_id\": 0,\r\nE \"position_embedding_type\": \"absolute\",\r\nE \"transformers_version\": \"4.35.0.dev0\",\r\nE \"type_vocab_size\": 2,\r\nE \"use_cache\": true,\r\nE \"vocab_size\": 28996\r\nE }\r\nE \r\nE Using the latest cached version of the module from /root/.cache/huggingface/modules/datasets_modules/datasets/glue/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad (last modified on Thu Oct 26 17:35:09 2023) since it couldn't be found locally at glue., or remotely on the Hugging Face Hub.\r\nE [INFO|modeling_utils.py:3057] 2023-10-31 09:38:07,393 >> loading weights file /data/hf_test/bert-base-cased/pytorch_model.bin\r\nE [INFO|modeling_utils.py:3838] 2023-10-31 09:38:08,324 >> Some weights of the model checkpoint at /data/hf_test/bert-base-cased were not used when initializing BertForSequenceClassification: ['cls.seq_relationship.bias', 'cls.predictions.decoder.weight', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.bias', 'cls.predictions.transform.LayerNorm.bias', 'cls.seq_relationship.weight', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias']\r\nE - This IS expected if you are initializing BertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. 
initializing a BertForSequenceClassification model from a BertForPreTraining model).\r\nE - This IS NOT expected if you are initializing BertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\nE [WARNING|modeling_utils.py:3850] 2023-10-31 09:38:08,324 >> Some weights of BertForSequenceClassification were not initialized from the model checkpoint at /data/hf_test/bert-base-cased and are newly initialized: ['classifier.bias', 'classifier.weight']\r\nE You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\nE Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-f2e61c34c9899b5a.arrow\r\nE Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-fd9184904bb613ef.arrow\r\nE Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-e2ab4fdde1bba06e.arrow\r\nE [WARNING|modeling_utils.py:3850] 2023-10-31 09:38:08,625 >> Some weights of BertForSequenceClassification were not initialized from the model checkpoint at /data/hf_test/bert-base-cased and are newly initialized: ['classifier.bias', 'classifier.weight']\r\nE You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\nE [INFO|trainer.py:698] 2023-10-31 09:38:12,532 >> The following columns in the training set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: idx, sentence2, sentence1. If idx, sentence2, sentence1 are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.\r\nE [INFO|trainer.py:1674] 2023-10-31 09:38:13,434 >> ***** Running training *****\r\nE [INFO|trainer.py:1675] 2023-10-31 09:38:13,435 >> Num examples = 3,668\r\nE [INFO|trainer.py:1676] 2023-10-31 09:38:13,435 >> Num Epochs = 2\r\nE [INFO|trainer.py:1677] 2023-10-31 09:38:13,435 >> Instantaneous batch size per device = 16\r\nE [INFO|trainer.py:1680] 2023-10-31 09:38:13,435 >> Total train batch size (w. parallel, distributed & accumulation) = 32\r\nE [INFO|trainer.py:1681] 2023-10-31 09:38:13,435 >> Gradient Accumulation steps = 1\r\nE [INFO|trainer.py:1682] 2023-10-31 09:38:13,435 >> Total optimization steps = 230\r\nE [INFO|trainer.py:1683] 2023-10-31 09:38:13,436 >> Number of trainable parameters = 54,155,905\r\n 50%|█████ | 115/230 [00:14<00:13, 8.55it/s][INFO|trainer.py:698] 2023-10-31 09:38:27,965 >> The following columns in the evaluation set don't have a corresponding argument in `FullyShardedDataParallel.forward` and have been ignored: idx, sentence2, sentence1. 
If idx, sentence2, sentence1 are not expected by `FullyShardedDataParallel.forward`, you can safely ignore this message.\r\nE [INFO|trainer.py:3093] 2023-10-31 09:38:27,969 >> ***** Running Evaluation *****\r\nE [INFO|trainer.py:3095] 2023-10-31 09:38:27,969 >> Num examples = 408\r\nE [INFO|trainer.py:3098] 2023-10-31 09:38:27,969 >> Batch size = 8\r\n 50%|█████ | 115/230 [00:15<00:13, 8.55it/[INFO|trainer.py:2816] 2023-10-31 09:38:29,156 >> Saving model checkpoint to ./xxx/checkpoint-115\r\nE [INFO|configuration_utils.py:461] 2023-10-31 09:38:29,158 >> Configuration saved in ./xxx/checkpoint-115/config.json\r\nE [INFO|modeling_utils.py:2168] 2023-10-31 09:38:29,159 >> Model weights saved in ./xxx/checkpoint-115/pytorch_model.bin\r\nE [INFO|tokenization_utils_base.py:2426] 2023-10-31 09:38:29,159 >> tokenizer config file saved in ./xxx/checkpoint-115/tokenizer_config.json\r\nE [INFO|tokenization_utils_base.py:2435] 2023-10-31 09:38:29,160 >> Special tokens file saved in ./xxx/checkpoint-115/special_tokens_map.json\r\nE /data/anaconda/envs/hf_test/lib/python3.8/site-packages/torch/distributed/_shard/sharded_tensor/api.py:1121: UserWarning: Please use DTensor instead and we are deprecating ShardedTensor.\r\nE warnings.warn(DEPRECATE_MSG)\r\nE /data/anaconda/envs/hf_test/lib/python3.8/site-packages/torch/distributed/_shard/sharded_tensor/api.py:1121: UserWarning: Please use DTensor instead and we are deprecating ShardedTensor.\r\nE warnings.warn(DEPRECATE_MSG)\r\nE Traceback (most recent call last):\r\nE File \"/data/hf_test/transformers/examples/pytorch/text-classification/run_glue.py\", line 649, in <module>\r\nE main()\r\nE File \"/data/hf_test/transformers/examples/pytorch/text-classification/run_glue.py\", line 557, in main\r\nE train_result = trainer.train(resume_from_checkpoint=checkpoint)\r\nE File \"/data/hf_test/transformers/src/transformers/trainer.py\", line 1511, in train\r\nE return inner_training_loop(\r\nE File \"/data/hf_test/transformers/src/transformers/trainer.py\", line 1894, in _inner_training_loop\r\nE self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)\r\nE File \"/data/hf_test/transformers/src/transformers/trainer.py\", line 2234, in _maybe_log_save_evaluate\r\nE self._save_checkpoint(model, trial, metrics=metrics)\r\nE File \"/data/hf_test/transformers/src/transformers/trainer.py\", line 2291, in _save_checkpoint\r\nE self.save_model(output_dir, _internal_call=True)\r\nE File \"/data/hf_test/transformers/src/transformers/trainer.py\", line 2756, in save_model\r\nE save_fsdp_model(self.accelerator.state.fsdp_plugin, self.accelerator, self.model, output_dir)\r\nE File \"/data/anaconda/envs/hf_test/lib/python3.8/site-packages/accelerate/utils/fsdp_utils.py\", line 72, in save_fsdp_model\r\nE dist_cp.save_state_dict(\r\nE File \"/data/anaconda/envs/hf_test/lib/python3.8/site-packages/torch/distributed/checkpoint/state_dict_saver.py\", line 113, in save_state_dict\r\nE central_plan = distW.reduce_scatter(\"plan\", local_step, global_step)\r\nE File \"/data/anaconda/envs/hf_test/lib/python3.8/site-packages/torch/distributed/checkpoint/utils.py\", line 177, in reduce_scatter\r\nE all_data = self.gather_object(local_data)\r\nE File \"/data/anaconda/envs/hf_test/lib/python3.8/site-packages/torch/distributed/checkpoint/utils.py\", line 108, in gather_object\r\nE dist.gather_object(\r\nE File \"/data/anaconda/envs/hf_test/lib/python3.8/site-packages/torch/distributed/c10d_logger.py\", line 47, in wrapper\r\nE return func(*args, **kwargs)\r\nE File 
\"/data/anaconda/envs/hf_test/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py\", line 2509, in gather_object\r\nE Traceback (most recent call last):\r\nE File \"/data/hf_test/transformers/examples/pytorch/text-classification/run_glue.py\", line 649, in <module>\r\nE main()\r\nE File \"/data/hf_test/transformers/examples/pytorch/text-classification/run_glue.py\", line 557, in main\r\nE gather(\r\nE File \"/data/anaconda/envs/hf_test/lib/python3.8/site-packages/torch/distributed/c10d_logger.py\", line 47, in wrapper\r\nE return func(*args, **kwargs)\r\nE File \"/data/anaconda/envs/hf_test/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py\", line 3078, in gather\r\nE train_result = trainer.train(resume_from_checkpoint=checkpoint)\r\nE File \"/data/hf_test/transformers/src/transformers/trainer.py\", line 1511, in train\r\nE return inner_training_loop(\r\nE File \"/data/hf_test/transformers/src/transformers/trainer.py\", line 1894, in _inner_training_loop\r\nE work = default_pg.gather(output_tensors, input_tensors, opts)\r\nE RuntimeError: ProcessGroupHCCL does not support gather\r\nE self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)\r\nE File \"/data/hf_test/transformers/src/transformers/trainer.py\", line 2234, in _maybe_log_save_evaluate\r\nE self._save_checkpoint(model, trial, metrics=metrics)\r\nE File \"/data/hf_test/transformers/src/transformers/trainer.py\", line 2291, in _save_checkpoint\r\nE self.save_model(output_dir, _internal_call=True)\r\nE File \"/data/hf_test/transformers/src/transformers/trainer.py\", line 2756, in save_model\r\nE save_fsdp_model(self.accelerator.state.fsdp_plugin, self.accelerator, self.model, output_dir)\r\nE File \"/data/anaconda/envs/hf_test/lib/python3.8/site-packages/accelerate/utils/fsdp_utils.py\", line 72, in save_fsdp_model\r\nE dist_cp.save_state_dict(\r\nE File \"/data/anaconda/envs/hf_test/lib/python3.8/site-packages/torch/distributed/checkpoint/state_dict_saver.py\", line 113, in save_state_dict\r\nE central_plan = distW.reduce_scatter(\"plan\", local_step, global_step)\r\nE File \"/data/anaconda/envs/hf_test/lib/python3.8/site-packages/torch/distributed/checkpoint/utils.py\", line 177, in reduce_scatter\r\nE all_data = self.gather_object(local_data)\r\nE File \"/data/anaconda/envs/hf_test/lib/python3.8/site-packages/torch/distributed/checkpoint/utils.py\", line 108, in gather_object\r\nE dist.gather_object(\r\nE File \"/data/anaconda/envs/hf_test/lib/python3.8/site-packages/torch/distributed/c10d_logger.py\", line 47, in wrapper\r\nE return func(*args, **kwargs)\r\nE File \"/data/anaconda/envs/hf_test/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py\", line 2509, in gather_object\r\nE gather(\r\nE File \"/data/anaconda/envs/hf_test/lib/python3.8/site-packages/torch/distributed/c10d_logger.py\", line 47, in wrapper\r\nE return func(*args, **kwargs)\r\nE File \"/data/anaconda/envs/hf_test/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py\", line 3078, in gather\r\nE work = default_pg.gather(output_tensors, input_tensors, opts)\r\nE RuntimeError: ProcessGroupHCCL does not support gather\r\nE /data/anaconda/envs/hf_test/lib/python3.8/tempfile.py:818: ResourceWarning: Implicitly cleaning up <TemporaryDirectory '/tmp/tmp9dfsya53'>\r\nE _warnings.warn(warn_message, ResourceWarning)\r\n 50%|█████ | 115/230 [00:17<00:17, 6.63it/s]\r\nE /data/anaconda/envs/hf_test/lib/python3.8/tempfile.py:818: ResourceWarning: Implicitly cleaning up <TemporaryDirectory 
'/tmp/tmpzdu72fdr'>\r\nE _warnings.warn(warn_message, ResourceWarning)\r\nE [2023-10-31 09:38:36,223] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: 1) local_rank: 0 (pid: 3079461) of binary: /data/anaconda/envs/hf_test/bin/python\r\nE Traceback (most recent call last):\r\nE File \"/data/anaconda/envs/hf_test/bin/accelerate\", line 8, in <module>\r\nE sys.exit(main())\r\nE File \"/data/anaconda/envs/hf_test/lib/python3.8/site-packages/accelerate/commands/accelerate_cli.py\", line 47, in main\r\nE args.func(args)\r\nE File \"/data/anaconda/envs/hf_test/lib/python3.8/site-packages/accelerate/commands/launch.py\", line 981, in launch_command\r\nE multi_gpu_launcher(args)\r\nE File \"/data/anaconda/envs/hf_test/lib/python3.8/site-packages/accelerate/commands/launch.py\", line 654, in multi_gpu_launcher\r\nE distrib_run.run(args)\r\nE File \"/data/anaconda/envs/hf_test/lib/python3.8/site-packages/torch/distributed/run.py\", line 797, in run\r\nE elastic_launch(\r\nE File \"/data/anaconda/envs/hf_test/lib/python3.8/site-packages/torch/distributed/launcher/api.py\", line 134, in __call__\r\nE return launch_agent(self._config, self._entrypoint, list(args))\r\nE File \"/data/anaconda/envs/hf_test/lib/python3.8/site-packages/torch/distributed/launcher/api.py\", line 264, in launch_agent\r\nE raise ChildFailedError(\r\nE torch.distributed.elastic.multiprocessing.errors.ChildFailedError:\r\nE ============================================================\r\nE /data/hf_test/transformers/examples/pytorch/text-classification/run_glue.py FAILED\r\nE ------------------------------------------------------------\r\nE Failures:\r\nE [1]:\r\nE time : 2023-10-31_09:38:36\r\nE host : localhost.localdomain\r\nE rank : 1 (local_rank: 1)\r\nE exitcode : 1 (pid: 3079463)\r\nE error_file: <N/A>\r\nE traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html\r\nE ------------------------------------------------------------\r\nE Root Cause (first observed failure):\r\nE [0]:\r\nE time : 2023-10-31_09:38:36\r\nE host : localhost.localdomain\r\nE rank : 0 (local_rank: 0)\r\nE exitcode : 1 (pid: 3079461)\r\nE error_file: <N/A>\r\nE traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html\r\nE ============================================================\r\n\r\nsrc/transformers/testing_utils.py:1835: RuntimeError\r\n------------------------------------------------------------------------------------------------------------- Captured stdout call -------------------------------------------------------------------------------------------------------------\r\n\r\nRunning: accelerate launch --num_processes 2 --main_process_port 10999 --use_fsdp --fsdp_auto_wrap_policy TRANSFORMER_BASED_WRAP --fsdp_state_dict_type SHARDED_STATE_DICT --fsdp_transformer_layer_cls_to_wrap BertLayer --fsdp_sharding_strategy 1 /data/hf_test/transformers/examples/pytorch/text-classification/run_glue.py --model_name_or_path /data/hf_test/bert-base-cased --task_name mrpc --output_dir ./xxx --overwrite_output_dir --do_train --max_seq_length 128 --per_device_train_batch_size 16 --learning_rate 5e-5 --num_train_epochs 2 --lr_scheduler_type cosine --logging_steps 25 --save_strategy epoch --do_eval --evaluation_strategy epoch --report_to none\r\nstdout: Fail to import hypothesis in common_utils, tests are not derandomized\r\nstdout: Fail to import hypothesis in common_utils, tests are not derandomized\r\nstdout: 10/31/2023 09:38:07 - WARNING - __main__ - Process rank: 0, 
device: npu:0, n_gpu: 1distributed training: True, 16-bits training: False\r\nstdout: 10/31/2023 09:38:07 - INFO - __main__ - Training/evaluation parameters TrainingArguments(\r\nstdout: _n_gpu=1,\r\nstdout: adafactor=False,\r\nstdout: adam_beta1=0.9,\r\nstdout: adam_beta2=0.999,\r\nstdout: adam_epsilon=1e-08,\r\nstdout: auto_find_batch_size=False,\r\nstdout: bf16=False,\r\nstdout: bf16_full_eval=False,\r\nstdout: data_seed=None,\r\nstdout: dataloader_drop_last=False,\r\nstdout: dataloader_num_workers=0,\r\nstdout: dataloader_pin_memory=True,\r\nstdout: ddp_backend=None,\r\nstdout: ddp_broadcast_buffers=None,\r\nstdout: ddp_bucket_cap_mb=None,\r\nstdout: ddp_find_unused_parameters=None,\r\nstdout: ddp_timeout=1800,\r\nstdout: debug=[],\r\nstdout: deepspeed=None,\r\nstdout: disable_tqdm=False,\r\nstdout: dispatch_batches=None,\r\nstdout: do_eval=True,\r\nstdout: do_predict=False,\r\nstdout: do_train=True,\r\nstdout: eval_accumulation_steps=None,\r\nstdout: eval_delay=0,\r\nstdout: eval_steps=None,\r\nstdout: evaluation_strategy=epoch,\r\nstdout: fp16=False,\r\nstdout: fp16_backend=auto,\r\nstdout: fp16_full_eval=False,\r\nstdout: fp16_opt_level=O1,\r\nstdout: fsdp=[],\r\nstdout: fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_grad_ckpt': False},\r\nstdout: fsdp_min_num_params=0,\r\nstdout: fsdp_transformer_layer_cls_to_wrap=None,\r\nstdout: full_determinism=False,\r\nstdout: gradient_accumulation_steps=1,\r\nstdout: gradient_checkpointing=False,\r\nstdout: greater_is_better=None,\r\nstdout: group_by_length=False,\r\nstdout: half_precision_backend=auto,\r\nstdout: hub_always_push=False,\r\nstdout: hub_model_id=None,\r\nstdout: hub_private_repo=False,\r\nstdout: hub_strategy=every_save,\r\nstdout: hub_token=<HUB_TOKEN>,\r\nstdout: ignore_data_skip=False,\r\nstdout: include_inputs_for_metrics=False,\r\nstdout: include_tokens_per_second=False,\r\nstdout: jit_mode_eval=False,\r\nstdout: label_names=None,\r\nstdout: label_smoothing_factor=0.0,\r\nstdout: learning_rate=5e-05,\r\nstdout: length_column_name=length,\r\nstdout: load_best_model_at_end=False,\r\nstdout: local_rank=0,\r\nstdout: log_level=passive,\r\nstdout: log_level_replica=warning,\r\nstdout: log_on_each_node=True,\r\nstdout: logging_dir=./xxx/runs/Oct31_09-37-58_localhost.localdomain,\r\nstdout: logging_first_step=False,\r\nstdout: logging_nan_inf_filter=True,\r\nstdout: logging_steps=25,\r\nstdout: logging_strategy=steps,\r\nstdout: lr_scheduler_type=cosine,\r\nstdout: max_grad_norm=1.0,\r\nstdout: max_steps=-1,\r\nstdout: metric_for_best_model=None,\r\nstdout: mp_parameters=,\r\nstdout: no_cuda=False,\r\nstdout: num_train_epochs=2.0,\r\nstdout: optim=adamw_torch,\r\nstdout: optim_args=None,\r\nstdout: output_dir=./xxx,\r\nstdout: overwrite_output_dir=True,\r\nstdout: past_index=-1,\r\nstdout: per_device_eval_batch_size=8,\r\nstdout: per_device_train_batch_size=16,\r\nstdout: prediction_loss_only=False,\r\nstdout: push_to_hub=False,\r\nstdout: push_to_hub_model_id=None,\r\nstdout: push_to_hub_organization=None,\r\nstdout: push_to_hub_token=<PUSH_TO_HUB_TOKEN>,\r\nstdout: ray_scope=last,\r\nstdout: remove_unused_columns=True,\r\nstdout: report_to=[],\r\nstdout: resume_from_checkpoint=None,\r\nstdout: run_name=./xxx,\r\nstdout: save_on_each_node=False,\r\nstdout: save_safetensors=False,\r\nstdout: save_steps=500,\r\nstdout: save_strategy=epoch,\r\nstdout: save_total_limit=None,\r\nstdout: seed=42,\r\nstdout: skip_memory_metrics=True,\r\nstdout: tf32=None,\r\nstdout: torch_compile=False,\r\nstdout: 
torch_compile_backend=None,\r\nstdout: torch_compile_mode=None,\r\nstdout: torchdynamo=None,\r\nstdout: tpu_metrics_debug=False,\r\nstdout: tpu_num_cores=None,\r\nstdout: use_cpu=False,\r\nstdout: use_ipex=False,\r\nstdout: use_legacy_prediction_loop=False,\r\nstdout: use_mps_device=False,\r\nstdout: warmup_ratio=0.0,\r\nstdout: warmup_steps=0,\r\nstdout: weight_decay=0.0,\r\nstdout: )\r\nstdout: 10/31/2023 09:38:07 - WARNING - datasets.load - Using the latest cached version of the module from /root/.cache/huggingface/modules/datasets_modules/datasets/glue/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad (last modified on Thu Oct 26 17:35:09 2023) since it couldn't be found locally at glue., or remotely on the Hugging Face Hub.\r\nstdout: 10/31/2023 09:38:07 - INFO - datasets.info - Loading Dataset Infos from /root/.cache/huggingface/modules/datasets_modules/datasets/glue/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad\r\nstdout: 10/31/2023 09:38:07 - INFO - datasets.builder - Overwrite dataset info from restored data version if exists.\r\nstdout: 10/31/2023 09:38:07 - INFO - datasets.info - Loading Dataset info from /root/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad\r\nstdout: 10/31/2023 09:38:07 - INFO - datasets.builder - Found cached dataset glue (/root/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad)\r\nstdout: 10/31/2023 09:38:07 - INFO - datasets.info - Loading Dataset info from /root/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad\r\nstdout: 10/31/2023 09:38:07 - WARNING - __main__ - Process rank: 1, device: npu:1, n_gpu: 1distributed training: True, 16-bits training: False\r\nstdout: 10/31/2023 09:38:07 - WARNING - datasets.load - Using the latest cached version of the module from /root/.cache/huggingface/modules/datasets_modules/datasets/glue/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad (last modified on Thu Oct 26 17:35:09 2023) since it couldn't be found locally at glue., or remotely on the Hugging Face Hub.\r\nstdout: Warning: since the loaded file is not a zipfile, only \"torch.device\" and \"str\" type parameters are currently supported for parameter types of map_locationIf parameter types of map_location is \"Callable[[torch.Tensor, str], torch.Tensor]\" or \"Dict[str, str]\", which is only support for zipfile,all tensors are currently loaded onto the CPU, which may introduce problems\r\nstdout: Warning: since the loaded file is not a zipfile, only \"torch.device\" and \"str\" type parameters are currently supported for parameter types of map_locationIf parameter types of map_location is \"Callable[[torch.Tensor, str], torch.Tensor]\" or \"Dict[str, str]\", which is only support for zipfile,all tensors are currently loaded onto the CPU, which may introduce problems\r\nstdout: 10/31/2023 09:38:08 - INFO - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-f2e61c34c9899b5a.arrow\r\nstdout: 10/31/2023 09:38:08 - INFO - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-fd9184904bb613ef.arrow\r\nstdout: 10/31/2023 09:38:08 - INFO - datasets.arrow_dataset - Loading cached processed 
dataset at /root/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-e2ab4fdde1bba06e.arrow\r\nstdout: 10/31/2023 09:38:10 - INFO - __main__ - Sample 2619 of the training set: {'sentence1': 'The proceedings were taken up with prosecutors outlining their case against Amrozi , reading 33 pages of documents outlining allegations against him .', 'sentence2': 'Proceedings were taken up with prosecutors outlining their case against Amrozi , reading a 33-page accusation letter to the court .', 'label': 1, 'idx': 2916, 'input_ids': [101, 1109, 10830, 1127, 1678, 1146, 1114, 24987, 1149, 13260, 1147, 1692, 1222, 7277, 2180, 5303, 117, 3455, 3081, 5097, 1104, 4961, 1149, 13260, 9966, 1222, 1140, 119, 102, 20661, 1127, 1678, 1146, 1114, 24987, 1149, 13260, 1147, 1692, 1222, 7277, 2180, 5303, 117, 3455, 170, 3081, 118, 3674, 21100, 2998, 1106, 1103, 2175, 119, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}.\r\nstdout: 10/31/2023 09:38:10 - INFO - __main__ - Sample 456 of the training set: {'sentence1': \"Chechen officials working for the Moscow-backed government are a frequent target for rebels and tension is running high ahead of next Sunday 's presidential election in war-torn Chechnya .\", 'sentence2': \"Officials in Chechnya 's Moscow-backed government are a frequent target for rebels , and tension is running high ahead of Sunday 's presidential election in the war-ravaged region .\", 'label': 1, 'idx': 509, 'input_ids': [101, 20394, 11252, 1424, 3878, 1684, 1111, 1103, 4116, 118, 5534, 1433, 1132, 170, 6539, 4010, 1111, 9283, 1105, 6646, 1110, 1919, 1344, 3075, 1104, 1397, 3625, 112, 188, 5200, 1728, 1107, 1594, 118, 7820, 20394, 11252, 15449, 119, 102, 9018, 1116, 1107, 20394, 11252, 15449, 112, 188, 4116, 118, 5534, 1433, 1132, 170, 6539, 4010, 1111, 9283, 117, 1105, 6646, 1110, 1919, 1344, 3075, 1104, 3625, 112, 188, 5200, 1728, 1107, 1103, 1594, 118, 187, 15677, 3660, 1805, 119, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}.\r\nstdout: 10/31/2023 09:38:10 - INFO - __main__ - Sample 102 of the training set: {'sentence1': \"Standard & Poor 's 500 stock index futures declined 4.40 points to 983.50 , while Nasdaq futures fell 6.5 points to 1,206.50 .\", 'sentence2': \"The Standard & Poor 's 500 Index was up 1.75 points , or 0.18 percent , to 977.68 .\", 'label': 0, 'idx': 116, 'input_ids': [101, 6433, 111, 11767, 112, 188, 2260, 4482, 7448, 2174, 1116, 5799, 125, 119, 1969, 1827, 1106, 5103, 1495, 119, 1851, 117, 1229, 11896, 1116, 1810, 4426, 2174, 1116, 2204, 127, 119, 126, 1827, 1106, 122, 117, 20278, 119, 1851, 119, 102, 1109, 6433, 111, 11767, 112, 188, 2260, 10146, 1108, 1146, 122, 119, 3453, 1827, 117, 1137, 121, 119, 1407, 3029, 117, 1106, 5311, 1559, 119, 5599, 119, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}.\r\nstdout: 10/31/2023 09:38:12 - WARNING - evaluate.loading - Using the latest cached version of the module from /root/.cache/huggingface/modules/evaluate_modules/metrics/evaluate-metric--glue/05234ba7acc44554edcca0978db5fa3bc600eeee66229abe79ff9887eacaf3ed (last modified on Fri Oct 27 14:19:02 2023) since it couldn't be found locally at evaluate-metric--glue, or remotely on the Hugging Face Hub.\r\nstdout: 10/31/2023 09:38:12 - WARNING - accelerate.utils.other - Detected kernel version 4.19.90, which is below the recommended minimum of 5.5.0; this can cause the process to hang. 
It is recommended to upgrade the kernel to the minimum version or higher.\r\nstdout: 10/31/2023 09:38:12 - WARNING - evaluate.loading - Using the latest cached version of the module from /root/.cache/huggingface/modules/evaluate_modules/metrics/evaluate-metric--glue/05234ba7acc44554edcca0978db5fa3bc600eeee66229abe79ff9887eacaf3ed (last modified on Fri Oct 27 14:19:02 2023) since it couldn't be found locally at evaluate-metric--glue, or remotely on the Hugging Face Hub.\r\nstdout: {'loss': 0.6273, 'learning_rate': 4.855652305297052e-05, 'epoch': 0.22}\r\nstdout: {'loss': 0.6318, 'learning_rate': 4.43927822676105e-05, 'epoch': 0.43}\r\nstdout: {'loss': 0.6359, 'learning_rate': 3.798959875088584e-05, 'epoch': 0.65}\r\nstdout: {'loss': 0.6028, 'learning_rate': 3.008640032631585e-05, 'epoch': 0.87}\r\nstdout: {'eval_loss': 0.5785336494445801, 'eval_accuracy': 0.7058823529411765, 'eval_f1': 0.8219584569732937, 'eval_combined_score': 0.7639204049572351, 'eval_runtime': 1.1857, 'eval_samples_per_second': 344.089, 'eval_steps_per_second': 21.927, 'epoch': 1.0}\r\nstdout: Fail to import hypothesis in common_utils, tests are not derandomized\r\n------------------------------------------------------------------------------------------------------------- Captured stderr call -------------------------------------------------------------------------------------------------------------\r\nstderr: The following values were not passed to `accelerate launch` and had defaults used instead:\r\nstderr: \t\tMore than one GPU was found, enabling multi-GPU training.\r\nstderr: \t\tIf this was unintended please pass in `--num_processes=1`.\r\nstderr: \t`--num_machines` was set to a value of `1`\r\nstderr: \t`--mixed_precision` was set to a value of `'no'`\r\nstderr: \t`--dynamo_backend` was set to a value of `'no'`\r\nstderr: To avoid this warning pass in values for each of the problematic parameters or run `accelerate config`.\r\nstderr: Using the latest cached version of the module from /root/.cache/huggingface/modules/datasets_modules/datasets/glue/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad (last modified on Thu Oct 26 17:35:09 2023) since it couldn't be found locally at glue., or remotely on the Hugging Face Hub.\r\nstderr: Loading Dataset Infos from /root/.cache/huggingface/modules/datasets_modules/datasets/glue/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad\r\nstderr: Overwrite dataset info from restored data version if exists.\r\nstderr: Loading Dataset info from /root/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad\r\nstderr: Found cached dataset glue (/root/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad)\r\nstderr: Loading Dataset info from /root/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad\r\nstderr: [INFO|configuration_utils.py:714] 2023-10-31 09:38:07,303 >> loading configuration file /data/hf_test/bert-base-cased/config.json\r\nstderr: [INFO|configuration_utils.py:776] 2023-10-31 09:38:07,314 >> Model config BertConfig {\r\nstderr: \"_name_or_path\": \"/data/hf_test/bert-base-cased\",\r\nstderr: \"architectures\": [\r\nstderr: \"BertForMaskedLM\"\r\nstderr: ],\r\nstderr: \"attention_probs_dropout_prob\": 0.1,\r\nstderr: \"classifier_dropout\": null,\r\nstderr: \"finetuning_task\": \"mrpc\",\r\nstderr: \"gradient_checkpointing\": false,\r\nstderr: \"hidden_act\": 
\"gelu\",\r\nstderr: \"hidden_dropout_prob\": 0.1,\r\nstderr: \"hidden_size\": 768,\r\nstderr: \"initializer_range\": 0.02,\r\nstderr: \"intermediate_size\": 3072,\r\nstderr: \"layer_norm_eps\": 1e-12,\r\nstderr: \"max_position_embeddings\": 512,\r\nstderr: \"model_type\": \"bert\",\r\nstderr: \"num_attention_heads\": 12,\r\nstderr: \"num_hidden_layers\": 12,\r\nstderr: \"pad_token_id\": 0,\r\nstderr: \"position_embedding_type\": \"absolute\",\r\nstderr: \"transformers_version\": \"4.35.0.dev0\",\r\nstderr: \"type_vocab_size\": 2,\r\nstderr: \"use_cache\": true,\r\nstderr: \"vocab_size\": 28996\r\nstderr: }\r\nstderr: \r\nstderr: [INFO|configuration_utils.py:714] 2023-10-31 09:38:07,314 >> loading configuration file /data/hf_test/bert-base-cased/config.json\r\nstderr: [INFO|configuration_utils.py:776] 2023-10-31 09:38:07,316 >> Model config BertConfig {\r\nstderr: \"_name_or_path\": \"/data/hf_test/bert-base-cased\",\r\nstderr: \"architectures\": [\r\nstderr: \"BertForMaskedLM\"\r\nstderr: ],\r\nstderr: \"attention_probs_dropout_prob\": 0.1,\r\nstderr: \"classifier_dropout\": null,\r\nstderr: \"gradient_checkpointing\": false,\r\nstderr: \"hidden_act\": \"gelu\",\r\nstderr: \"hidden_dropout_prob\": 0.1,\r\nstderr: \"hidden_size\": 768,\r\nstderr: \"initializer_range\": 0.02,\r\nstderr: \"intermediate_size\": 3072,\r\nstderr: \"layer_norm_eps\": 1e-12,\r\nstderr: \"max_position_embeddings\": 512,\r\nstderr: \"model_type\": \"bert\",\r\nstderr: \"num_attention_heads\": 12,\r\nstderr: \"num_hidden_layers\": 12,\r\nstderr: \"pad_token_id\": 0,\r\nstderr: \"position_embedding_type\": \"absolute\",\r\nstderr: \"transformers_version\": \"4.35.0.dev0\",\r\nstderr: \"type_vocab_size\": 2,\r\nstderr: \"use_cache\": true,\r\nstderr: \"vocab_size\": 28996\r\nstderr: }\r\nstderr: \r\nstderr: [INFO|tokenization_utils_base.py:2019] 2023-10-31 09:38:07,316 >> loading file vocab.txt\r\nstderr: [INFO|tokenization_utils_base.py:2019] 2023-10-31 09:38:07,316 >> loading file tokenizer.json\r\nstderr: [INFO|tokenization_utils_base.py:2019] 2023-10-31 09:38:07,317 >> loading file added_tokens.json\r\nstderr: [INFO|tokenization_utils_base.py:2019] 2023-10-31 09:38:07,317 >> loading file special_tokens_map.json\r\nstderr: [INFO|tokenization_utils_base.py:2019] 2023-10-31 09:38:07,317 >> loading file tokenizer_config.json\r\nstderr: [INFO|configuration_utils.py:714] 2023-10-31 09:38:07,317 >> loading configuration file /data/hf_test/bert-base-cased/config.json\r\nstderr: [INFO|configuration_utils.py:776] 2023-10-31 09:38:07,318 >> Model config BertConfig {\r\nstderr: \"_name_or_path\": \"/data/hf_test/bert-base-cased\",\r\nstderr: \"architectures\": [\r\nstderr: \"BertForMaskedLM\"\r\nstderr: ],\r\nstderr: \"attention_probs_dropout_prob\": 0.1,\r\nstderr: \"classifier_dropout\": null,\r\nstderr: \"gradient_checkpointing\": false,\r\nstderr: \"hidden_act\": \"gelu\",\r\nstderr: \"hidden_dropout_prob\": 0.1,\r\nstderr: \"hidden_size\": 768,\r\nstderr: \"initializer_range\": 0.02,\r\nstderr: \"intermediate_size\": 3072,\r\nstderr: \"layer_norm_eps\": 1e-12,\r\nstderr: \"max_position_embeddings\": 512,\r\nstderr: \"model_type\": \"bert\",\r\nstderr: \"num_attention_heads\": 12,\r\nstderr: \"num_hidden_layers\": 12,\r\nstderr: \"pad_token_id\": 0,\r\nstderr: \"position_embedding_type\": \"absolute\",\r\nstderr: \"transformers_version\": \"4.35.0.dev0\",\r\nstderr: \"type_vocab_size\": 2,\r\nstderr: \"use_cache\": true,\r\nstderr: \"vocab_size\": 28996\r\nstderr: }\r\nstderr: \r\nstderr: Using the latest cached version 
of the module from /root/.cache/huggingface/modules/datasets_modules/datasets/glue/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad (last modified on Thu Oct 26 17:35:09 2023) since it couldn't be found locally at glue., or remotely on the Hugging Face Hub.\r\nstderr: [INFO|modeling_utils.py:3057] 2023-10-31 09:38:07,393 >> loading weights file /data/hf_test/bert-base-cased/pytorch_model.bin\r\nstderr: [INFO|modeling_utils.py:3838] 2023-10-31 09:38:08,324 >> Some weights of the model checkpoint at /data/hf_test/bert-base-cased were not used when initializing BertForSequenceClassification: ['cls.seq_relationship.bias', 'cls.predictions.decoder.weight', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.bias', 'cls.predictions.transform.LayerNorm.bias', 'cls.seq_relationship.weight', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias']\r\nstderr: - This IS expected if you are initializing BertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\r\nstderr: - This IS NOT expected if you are initializing BertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\nstderr: [WARNING|modeling_utils.py:3850] 2023-10-31 09:38:08,324 >> Some weights of BertForSequenceClassification were not initialized from the model checkpoint at /data/hf_test/bert-base-cased and are newly initialized: ['classifier.bias', 'classifier.weight']\r\nstderr: You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\nstderr: Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-f2e61c34c9899b5a.arrow\r\nstderr: Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-fd9184904bb613ef.arrow\r\nstderr: Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-e2ab4fdde1bba06e.arrow\r\nstderr: [WARNING|modeling_utils.py:3850] 2023-10-31 09:38:08,625 >> Some weights of BertForSequenceClassification were not initialized from the model checkpoint at /data/hf_test/bert-base-cased and are newly initialized: ['classifier.bias', 'classifier.weight']\r\nstderr: You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\nstderr: [INFO|trainer.py:698] 2023-10-31 09:38:12,532 >> The following columns in the training set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: idx, sentence2, sentence1. 
If idx, sentence2, sentence1 are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.\r\nstderr: [INFO|trainer.py:1674] 2023-10-31 09:38:13,434 >> ***** Running training *****\r\nstderr: [INFO|trainer.py:1675] 2023-10-31 09:38:13,435 >> Num examples = 3,668\r\nstderr: [INFO|trainer.py:1676] 2023-10-31 09:38:13,435 >> Num Epochs = 2\r\nstderr: [INFO|trainer.py:1677] 2023-10-31 09:38:13,435 >> Instantaneous batch size per device = 16\r\nstderr: [INFO|trainer.py:1680] 2023-10-31 09:38:13,435 >> Total train batch size (w. parallel, distributed & accumulation) = 32\r\nstderr: [INFO|trainer.py:1681] 2023-10-31 09:38:13,435 >> Gradient Accumulation steps = 1\r\nstderr: [INFO|trainer.py:1682] 2023-10-31 09:38:13,435 >> Total optimization steps = 230\r\nstderr: [INFO|trainer.py:1683] 2023-10-31 09:38:13,436 >> Number of trainable parameters = 54,155,905\r\n 50%|█████ | 115/230 [00:14<00:13, 8.55it/s][INFO|trainer.py:698] 2023-10-31 09:38:27,965 >> The following columns in the evaluation set don't have a corresponding argument in `FullyShardedDataParallel.forward` and have been ignored: idx, sentence2, sentence1. If idx, sentence2, sentence1 are not expected by `FullyShardedDataParallel.forward`, you can safely ignore this message.\r\nstderr: [INFO|trainer.py:3093] 2023-10-31 09:38:27,969 >> ***** Running Evaluation *****\r\nstderr: [INFO|trainer.py:3095] 2023-10-31 09:38:27,969 >> Num examples = 408\r\nstderr: [INFO|trainer.py:3098] 2023-10-31 09:38:27,969 >> Batch size = 8\r\n 50%|█████ | 115/230 [00:15<00:13, 8.55it/[INFO|trainer.py:2816] 2023-10-31 09:38:29,156 >> Saving model checkpoint to ./xxx/checkpoint-115\r\nstderr: [INFO|configuration_utils.py:461] 2023-10-31 09:38:29,158 >> Configuration saved in ./xxx/checkpoint-115/config.json\r\nstderr: [INFO|modeling_utils.py:2168] 2023-10-31 09:38:29,159 >> Model weights saved in ./xxx/checkpoint-115/pytorch_model.bin\r\nstderr: [INFO|tokenization_utils_base.py:2426] 2023-10-31 09:38:29,159 >> tokenizer config file saved in ./xxx/checkpoint-115/tokenizer_config.json\r\nstderr: [INFO|tokenization_utils_base.py:2435] 2023-10-31 09:38:29,160 >> Special tokens file saved in ./xxx/checkpoint-115/special_tokens_map.json\r\nstderr: /data/anaconda/envs/hf_test/lib/python3.8/site-packages/torch/distributed/_shard/sharded_tensor/api.py:1121: UserWarning: Please use DTensor instead and we are deprecating ShardedTensor.\r\nstderr: warnings.warn(DEPRECATE_MSG)\r\nstderr: /data/anaconda/envs/hf_test/lib/python3.8/site-packages/torch/distributed/_shard/sharded_tensor/api.py:1121: UserWarning: Please use DTensor instead and we are deprecating ShardedTensor.\r\nstderr: warnings.warn(DEPRECATE_MSG)\r\nstderr: Traceback (most recent call last):\r\nstderr: File \"/data/hf_test/transformers/examples/pytorch/text-classification/run_glue.py\", line 649, in <module>\r\nstderr: main()\r\nstderr: File \"/data/hf_test/transformers/examples/pytorch/text-classification/run_glue.py\", line 557, in main\r\nstderr: train_result = trainer.train(resume_from_checkpoint=checkpoint)\r\nstderr: File \"/data/hf_test/transformers/src/transformers/trainer.py\", line 1511, in train\r\nstderr: return inner_training_loop(\r\nstderr: File \"/data/hf_test/transformers/src/transformers/trainer.py\", line 1894, in _inner_training_loop\r\nstderr: self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)\r\nstderr: File \"/data/hf_test/transformers/src/transformers/trainer.py\", line 2234, in _maybe_log_save_evaluate\r\nstderr: 
self._save_checkpoint(model, trial, metrics=metrics)\r\nstderr: File \"/data/hf_test/transformers/src/transformers/trainer.py\", line 2291, in _save_checkpoint\r\nstderr: self.save_model(output_dir, _internal_call=True)\r\nstderr: File \"/data/hf_test/transformers/src/transformers/trainer.py\", line 2756, in save_model\r\nstderr: save_fsdp_model(self.accelerator.state.fsdp_plugin, self.accelerator, self.model, output_dir)\r\nstderr: File \"/data/anaconda/envs/hf_test/lib/python3.8/site-packages/accelerate/utils/fsdp_utils.py\", line 72, in save_fsdp_model\r\nstderr: dist_cp.save_state_dict(\r\nstderr: File \"/data/anaconda/envs/hf_test/lib/python3.8/site-packages/torch/distributed/checkpoint/state_dict_saver.py\", line 113, in save_state_dict\r\nstderr: central_plan = distW.reduce_scatter(\"plan\", local_step, global_step)\r\nstderr: File \"/data/anaconda/envs/hf_test/lib/python3.8/site-packages/torch/distributed/checkpoint/utils.py\", line 177, in reduce_scatter\r\nstderr: all_data = self.gather_object(local_data)\r\nstderr: File \"/data/anaconda/envs/hf_test/lib/python3.8/site-packages/torch/distributed/checkpoint/utils.py\", line 108, in gather_object\r\nstderr: dist.gather_object(\r\nstderr: File \"/data/anaconda/envs/hf_test/lib/python3.8/site-packages/torch/distributed/c10d_logger.py\", line 47, in wrapper\r\nstderr: return func(*args, **kwargs)\r\nstderr: File \"/data/anaconda/envs/hf_test/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py\", line 2509, in gather_object\r\nstderr: Traceback (most recent call last):\r\nstderr: File \"/data/hf_test/transformers/examples/pytorch/text-classification/run_glue.py\", line 649, in <module>\r\nstderr: main()\r\nstderr: File \"/data/hf_test/transformers/examples/pytorch/text-classification/run_glue.py\", line 557, in main\r\nstderr: gather(\r\nstderr: File \"/data/anaconda/envs/hf_test/lib/python3.8/site-packages/torch/distributed/c10d_logger.py\", line 47, in wrapper\r\nstderr: return func(*args, **kwargs)\r\nstderr: File \"/data/anaconda/envs/hf_test/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py\", line 3078, in gather\r\nstderr: train_result = trainer.train(resume_from_checkpoint=checkpoint)\r\nstderr: File \"/data/hf_test/transformers/src/transformers/trainer.py\", line 1511, in train\r\nstderr: return inner_training_loop(\r\nstderr: File \"/data/hf_test/transformers/src/transformers/trainer.py\", line 1894, in _inner_training_loop\r\nstderr: work = default_pg.gather(output_tensors, input_tensors, opts)\r\nstderr: RuntimeError: ProcessGroupHCCL does not support gather\r\nstderr: self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)\r\nstderr: File \"/data/hf_test/transformers/src/transformers/trainer.py\", line 2234, in _maybe_log_save_evaluate\r\nstderr: self._save_checkpoint(model, trial, metrics=metrics)\r\nstderr: File \"/data/hf_test/transformers/src/transformers/trainer.py\", line 2291, in _save_checkpoint\r\nstderr: self.save_model(output_dir, _internal_call=True)\r\nstderr: File \"/data/hf_test/transformers/src/transformers/trainer.py\", line 2756, in save_model\r\nstderr: save_fsdp_model(self.accelerator.state.fsdp_plugin, self.accelerator, self.model, output_dir)\r\nstderr: File \"/data/anaconda/envs/hf_test/lib/python3.8/site-packages/accelerate/utils/fsdp_utils.py\", line 72, in save_fsdp_model\r\nstderr: dist_cp.save_state_dict(\r\nstderr: File \"/data/anaconda/envs/hf_test/lib/python3.8/site-packages/torch/distributed/checkpoint/state_dict_saver.py\", line 113, 
in save_state_dict\r\nstderr: central_plan = distW.reduce_scatter(\"plan\", local_step, global_step)\r\nstderr: File \"/data/anaconda/envs/hf_test/lib/python3.8/site-packages/torch/distributed/checkpoint/utils.py\", line 177, in reduce_scatter\r\nstderr: all_data = self.gather_object(local_data)\r\nstderr: File \"/data/anaconda/envs/hf_test/lib/python3.8/site-packages/torch/distributed/checkpoint/utils.py\", line 108, in gather_object\r\nstderr: dist.gather_object(\r\nstderr: File \"/data/anaconda/envs/hf_test/lib/python3.8/site-packages/torch/distributed/c10d_logger.py\", line 47, in wrapper\r\nstderr: return func(*args, **kwargs)\r\nstderr: File \"/data/anaconda/envs/hf_test/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py\", line 2509, in gather_object\r\nstderr: gather(\r\nstderr: File \"/data/anaconda/envs/hf_test/lib/python3.8/site-packages/torch/distributed/c10d_logger.py\", line 47, in wrapper\r\nstderr: return func(*args, **kwargs)\r\nstderr: File \"/data/anaconda/envs/hf_test/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py\", line 3078, in gather\r\nstderr: work = default_pg.gather(output_tensors, input_tensors, opts)\r\nstderr: RuntimeError: ProcessGroupHCCL does not support gather\r\nstderr: /data/anaconda/envs/hf_test/lib/python3.8/tempfile.py:818: ResourceWarning: Implicitly cleaning up <TemporaryDirectory '/tmp/tmp9dfsya53'>\r\nstderr: _warnings.warn(warn_message, ResourceWarning)\r\n 50%|█████ | 115/230 [00:17<00:17, 6.63it/s]\r\nstderr: /data/anaconda/envs/hf_test/lib/python3.8/tempfile.py:818: ResourceWarning: Implicitly cleaning up <TemporaryDirectory '/tmp/tmpzdu72fdr'>\r\nstderr: _warnings.warn(warn_message, ResourceWarning)\r\nstderr: [2023-10-31 09:38:36,223] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: 1) local_rank: 0 (pid: 3079461) of binary: /data/anaconda/envs/hf_test/bin/python\r\nstderr: Traceback (most recent call last):\r\nstderr: File \"/data/anaconda/envs/hf_test/bin/accelerate\", line 8, in <module>\r\nstderr: sys.exit(main())\r\nstderr: File \"/data/anaconda/envs/hf_test/lib/python3.8/site-packages/accelerate/commands/accelerate_cli.py\", line 47, in main\r\nstderr: args.func(args)\r\nstderr: File \"/data/anaconda/envs/hf_test/lib/python3.8/site-packages/accelerate/commands/launch.py\", line 981, in launch_command\r\nstderr: multi_gpu_launcher(args)\r\nstderr: File \"/data/anaconda/envs/hf_test/lib/python3.8/site-packages/accelerate/commands/launch.py\", line 654, in multi_gpu_launcher\r\nstderr: distrib_run.run(args)\r\nstderr: File \"/data/anaconda/envs/hf_test/lib/python3.8/site-packages/torch/distributed/run.py\", line 797, in run\r\nstderr: elastic_launch(\r\nstderr: File \"/data/anaconda/envs/hf_test/lib/python3.8/site-packages/torch/distributed/launcher/api.py\", line 134, in __call__\r\nstderr: return launch_agent(self._config, self._entrypoint, list(args))\r\nstderr: File \"/data/anaconda/envs/hf_test/lib/python3.8/site-packages/torch/distributed/launcher/api.py\", line 264, in launch_agent\r\nstderr: raise ChildFailedError(\r\nstderr: torch.distributed.elastic.multiprocessing.errors.ChildFailedError:\r\nstderr: ============================================================\r\nstderr: /data/hf_test/transformers/examples/pytorch/text-classification/run_glue.py FAILED\r\nstderr: ------------------------------------------------------------\r\nstderr: Failures:\r\nstderr: [1]:\r\nstderr: time : 2023-10-31_09:38:36\r\nstderr: host : localhost.localdomain\r\nstderr: rank : 1 (local_rank: 
1)\r\nstderr: exitcode : 1 (pid: 3079463)\r\nstderr: error_file: <N/A>\r\nstderr: traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html\r\nstderr: ------------------------------------------------------------\r\nstderr: Root Cause (first observed failure):\r\nstderr: [0]:\r\nstderr: time : 2023-10-31_09:38:36\r\nstderr: host : localhost.localdomain\r\nstderr: rank : 0 (local_rank: 0)\r\nstderr: exitcode : 1 (pid: 3079461)\r\nstderr: error_file: <N/A>\r\nstderr: traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html\r\nstderr: ============================================================\r\n=============================================================================================================== warnings summary ===============================================================================================================\r\n../../anaconda/envs/hf_test/lib/python3.8/site-packages/_pytest/config/__init__.py:1373\r\n /data/anaconda/envs/hf_test/lib/python3.8/site-packages/_pytest/config/__init__.py:1373: PytestConfigWarning: Unknown config option: doctest_glob\r\n \r\n self._warn_or_fail_if_strict(f\"Unknown config option: {key}\\n\")\r\n\r\ntests/fsdp/test_fsdp.py::TrainerIntegrationFSDP::test_fsdp_config_full_shard_fp16\r\ntests/fsdp/test_fsdp.py::TrainerIntegrationFSDP::test_fsdp_config_shard_grad_op_fp16\r\n /data/anaconda/envs/hf_test/lib/python3.8/site-packages/torch/cuda/amp/grad_scaler.py:125: UserWarning: torch.cuda.amp.GradScaler is enabled, but CUDA is not available. Disabling.\r\n warnings.warn(\r\n\r\n-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html\r\n=========================================================================================================== short test summary info ============================================================================================================\r\nFAILED tests/fsdp/test_fsdp.py::TrainerIntegrationFSDP::test_training_and_can_resume_normally_SHARDED_STATE_DICT - RuntimeError: 'accelerate launch --num_processes 2 --main_process_port 10999 --use_fsdp --fsdp_auto_wrap_policy TRANSFORMER_BASED_WRAP --fsdp_state_dict_type SHARDED_STATE_DICT --fsdp_transformer_layer_cls_to_wrap BertLayer --fsdp_shar...\r\n============================================================================================= 1 failed, 11 passed, 3 warnings in 777.77s (0:12:57) ============================================================================================\r\n```",
"FYI https://github.com/huggingface/transformers/pull/27120#issuecomment-1786308666 😄 @ydshieh ",
"Is this PR still under review? Please inform me if any further revisions are required :-) @ydshieh and @amyeroberts ",
"We don't expect all tests 100% run without problem on other devices. My question is just to see what is the current results. It doesn't seem bad running on NPU !\r\n\r\nLGTM but waiting @amyeroberts to give her 👍 if everything is good to her."
] | 1,698 | 1,698 | 1,698 |
CONTRIBUTOR
| null |
# What does this PR do?
Part of https://github.com/huggingface/transformers/issues/25654#issuecomment-1783704306
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
cc @ydshieh and @ArthurZucker
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27120/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27120/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27120",
"html_url": "https://github.com/huggingface/transformers/pull/27120",
"diff_url": "https://github.com/huggingface/transformers/pull/27120.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27120.patch",
"merged_at": 1698819427000
}
|
https://api.github.com/repos/huggingface/transformers/issues/27119
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27119/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27119/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27119/events
|
https://github.com/huggingface/transformers/pull/27119
| 1,966,440,199 |
PR_kwDOCUB6oc5eBY6E
| 27,119 |
enable Megatron-LM shard for llama2 model
|
{
"login": "frankdongms",
"id": 117946481,
"node_id": "U_kgDOBwe4cQ",
"avatar_url": "https://avatars.githubusercontent.com/u/117946481?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/frankdongms",
"html_url": "https://github.com/frankdongms",
"followers_url": "https://api.github.com/users/frankdongms/followers",
"following_url": "https://api.github.com/users/frankdongms/following{/other_user}",
"gists_url": "https://api.github.com/users/frankdongms/gists{/gist_id}",
"starred_url": "https://api.github.com/users/frankdongms/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/frankdongms/subscriptions",
"organizations_url": "https://api.github.com/users/frankdongms/orgs",
"repos_url": "https://api.github.com/users/frankdongms/repos",
"events_url": "https://api.github.com/users/frankdongms/events{/privacy}",
"received_events_url": "https://api.github.com/users/frankdongms/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @frankdongms, thanks for opening this PR! \r\n\r\nThis is a very cool piece of work - I'll give a full review once all tests are passing. I'm hesitant to bring in such a change as it is implemented at the moment. \r\n\r\nOne of the core ideas guiding design in the transformers library is [one file per model](https://huggingface.co/blog/transformers-design-philosophy). One reason for this is, and a lot of other decisions in transformers, is to ensure that our code is easy to understand and modify. This PR adds a whole new module, and replaces the standard `nn.Linear` layer, which most users will be familiar with, with one that requires the user to understand custom parallel handling logic. \r\n\r\nWhat I would suggest is creating a new model - MegatronLlama2 which contains all of the custom layers in one file `modeling_megatron_llama2.py`\r\n\r\ncc @pacman100 - are there any other considerations regarding this kind of implementation and Trainer? ",
"Hi, @frank-dong-ms . the parallel code is inspired by huggingface thomas branch [repo](https://github.com/huggingface/transformers/blob/thomas/dirty_bloom_tp/src/transformers/models/bloom/parallel_layers.py), and some code is copied from it. may be need to mention it in the code.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,698 | 1,703 | 1,703 |
NONE
| null |
# What does this PR do?
Enable Megatron-LM-style sharding for the Llama 2 model. For large models such as Llama 2 70B, this lets us shard the model across multiple GPUs.
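For intuition, here is a tiny, non-distributed sketch (illustration only — not this PR's implementation; all names and shapes are made up) of the Megatron-style idea of splitting a linear layer's weight across ranks and recombining the partial results:
```python
import torch

hidden, out_features, world_size = 8, 16, 2
x = torch.randn(4, hidden)
w = torch.randn(out_features, hidden)

# Full (unsharded) linear projection
full = x @ w.t()

# Column-parallel style: shard the output dimension across two hypothetical ranks,
# compute each partial projection, then concatenate (stand-in for an all-gather)
shards = w.chunk(world_size, dim=0)
partials = [x @ s.t() for s in shards]
sharded = torch.cat(partials, dim=-1)

assert torch.allclose(full, sharded)
```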
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27119/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27119/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27119",
"html_url": "https://github.com/huggingface/transformers/pull/27119",
"diff_url": "https://github.com/huggingface/transformers/pull/27119.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27119.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/27118
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27118/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27118/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27118/events
|
https://github.com/huggingface/transformers/issues/27118
| 1,966,399,506 |
I_kwDOCUB6oc51NOAS
| 27,118 |
nodejs
|
{
"login": "Goddard",
"id": 231351,
"node_id": "MDQ6VXNlcjIzMTM1MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/231351?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Goddard",
"html_url": "https://github.com/Goddard",
"followers_url": "https://api.github.com/users/Goddard/followers",
"following_url": "https://api.github.com/users/Goddard/following{/other_user}",
"gists_url": "https://api.github.com/users/Goddard/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Goddard/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Goddard/subscriptions",
"organizations_url": "https://api.github.com/users/Goddard/orgs",
"repos_url": "https://api.github.com/users/Goddard/repos",
"events_url": "https://api.github.com/users/Goddard/events{/privacy}",
"received_events_url": "https://api.github.com/users/Goddard/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @xenova ",
"Thanks for the ping @amyeroberts :)\n\n@Goddard This error is because `Headers` (and `fetch`) are not global variables in Node 16. Although you could use a polyfill, we recommend upgrading to Node 18+.\n\nAlso, since Node 16 is EOL, we do not plan on adding backwards compatibility support.",
"> Thanks for the ping @amyeroberts :)\r\n> \r\n> @Goddard This error is because `Headers` (and `fetch`) are not global variables in Node 16. Although you could use a polyfill, we recommend upgrading to Node 18+.\r\n> \r\n> Also, since Node 16 is EOL, we do not plan on adding backwards compatibility support.\r\n\r\nGood to know. Thank you."
] | 1,698 | 1,698 | 1,698 |
NONE
| null |
### System Info
After installing with `npm i @xenova/transformers` and copying the example at https://huggingface.co/docs/transformers.js/tutorials/node, I get this error:
@xenova/transformers/src/utils/hub.js:188
const headers = new Headers();
Before the error, I see warnings like: ["Unable to load from local path @xenova/transformers/models/Xenova/distilbert-base-uncased-distilled-squad/tokenizer.json": "ReferenceError: Headers is not defined"]
I looked at that location and no models directory exists there. According to the documentation, models are supposed to be downloaded automatically, right?
This test was done with Node 16; permissions look fine, and it was run as a regular user on Linux.
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Install via npm then follow the node example - https://huggingface.co/docs/transformers.js/tutorials/node
### Expected behavior
Give a result
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27118/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27118/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/27117
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27117/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27117/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27117/events
|
https://github.com/huggingface/transformers/pull/27117
| 1,966,337,272 |
PR_kwDOCUB6oc5eBC-T
| 27,117 |
added unsqueeze_dim to apply_rotary_pos_emb
|
{
"login": "ShashankMosaicML",
"id": 144760128,
"node_id": "U_kgDOCKDdQA",
"avatar_url": "https://avatars.githubusercontent.com/u/144760128?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ShashankMosaicML",
"html_url": "https://github.com/ShashankMosaicML",
"followers_url": "https://api.github.com/users/ShashankMosaicML/followers",
"following_url": "https://api.github.com/users/ShashankMosaicML/following{/other_user}",
"gists_url": "https://api.github.com/users/ShashankMosaicML/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ShashankMosaicML/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ShashankMosaicML/subscriptions",
"organizations_url": "https://api.github.com/users/ShashankMosaicML/orgs",
"repos_url": "https://api.github.com/users/ShashankMosaicML/repos",
"events_url": "https://api.github.com/users/ShashankMosaicML/events{/privacy}",
"received_events_url": "https://api.github.com/users/ShashankMosaicML/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Thank you @gante for reviewing and providing me the steps to fix the errors! However I still see some errors after following the steps, and on clicking the details for the error, I see the following. Could you please let me know how to fix this error? Thank you!\r\n<img width=\"1386\" alt=\"Screenshot 2023-10-28 at 10 16 16 PM\" src=\"https://github.com/huggingface/transformers/assets/144760128/f8202adb-a483-4e87-b855-146af4eedad7\">\r\n",
"Hi @amyeroberts , thanks for suggesting the changes. I have incorporated those, but some quality checks are still failing. Could you take a look?\r\n<img width=\"1055\" alt=\"Screenshot 2023-10-30 at 2 10 24 PM\" src=\"https://github.com/huggingface/transformers/assets/144760128/e671134a-9c70-486c-ab76-d358aa935a54\">\r\n",
"@ShashankMosaicML running `make fixup` then committing the changes doesn't fix it?",
"@gante , I think that worked! (I wasn't running `make fixup` properly earlier 😅 )",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27117). All of your documentation changes will be reflected on that endpoint."
] | 1,698 | 1,698 | 1,698 |
CONTRIBUTOR
| null |
# Making the unsqueeze dimension parameterized in the apply_rotary_pos_emb function in modeling_llama.py
This PR introduces a new parameter, `unsqueeze_dim`, to the [apply_rotary_pos_emb function](https://github.com/huggingface/transformers/blob/08a2edfc6629a323effd7a85feafed9e6701e2dd/src/transformers/models/llama/modeling_llama.py#L208). It specifies the dimension along which the cosine and sine rotary tensors are unsqueezed so that they broadcast correctly against the query and key tensors. This makes the function compatible with codebases whose query and key tensors use different layouts, without any back-and-forth transposing.
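For context, here is a rough, self-contained sketch of the behaviour (a simplified stand-in, not the library code; shapes and values are made up for the example):
```python
import torch

def rotate_half(x):
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat((-x2, x1), dim=-1)

def apply_rotary_pos_emb(q, k, cos, sin, unsqueeze_dim=1):
    # unsqueeze_dim picks the axis along which cos/sin become broadcastable to q/k
    cos = cos.unsqueeze(unsqueeze_dim)
    sin = sin.unsqueeze(unsqueeze_dim)
    return q * cos + rotate_half(q) * sin, k * cos + rotate_half(k) * sin

# q/k stored as [batch, seq_len, num_heads, head_dim]: broadcast cos/sin over dim 2
q = torch.randn(2, 16, 8, 64)
k = torch.randn(2, 16, 8, 64)
cos = torch.randn(2, 16, 64)   # [batch, seq_len, head_dim]
sin = torch.randn(2, 16, 64)
q_rot, k_rot = apply_rotary_pos_emb(q, k, cos, sin, unsqueeze_dim=2)
```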
Fixes #26948
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. [Link to the issue](https://github.com/huggingface/transformers/issues/26948)
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@gante , @ArthurZucker
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27117/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27117/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27117",
"html_url": "https://github.com/huggingface/transformers/pull/27117",
"diff_url": "https://github.com/huggingface/transformers/pull/27117.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27117.patch",
"merged_at": 1698848218000
}
|
https://api.github.com/repos/huggingface/transformers/issues/27116
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27116/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27116/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27116/events
|
https://github.com/huggingface/transformers/pull/27116
| 1,966,334,618 |
PR_kwDOCUB6oc5eBCZ4
| 27,116 |
Fix data2vec-audio note about attention mask
|
{
"login": "gau-nernst",
"id": 26946864,
"node_id": "MDQ6VXNlcjI2OTQ2ODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/26946864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gau-nernst",
"html_url": "https://github.com/gau-nernst",
"followers_url": "https://api.github.com/users/gau-nernst/followers",
"following_url": "https://api.github.com/users/gau-nernst/following{/other_user}",
"gists_url": "https://api.github.com/users/gau-nernst/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gau-nernst/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gau-nernst/subscriptions",
"organizations_url": "https://api.github.com/users/gau-nernst/orgs",
"repos_url": "https://api.github.com/users/gau-nernst/repos",
"events_url": "https://api.github.com/users/gau-nernst/events{/privacy}",
"received_events_url": "https://api.github.com/users/gau-nernst/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"LGTM as well! Thanks for the much clearer note here!",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27116). All of your documentation changes will be reflected on that endpoint."
] | 1,698 | 1,698 | 1,698 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes #25621
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ylacombe
Sorry for the late update. I was busy with something else. Finally got some time to do this quick doc update.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27116/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27116/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27116",
"html_url": "https://github.com/huggingface/transformers/pull/27116",
"diff_url": "https://github.com/huggingface/transformers/pull/27116.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27116.patch",
"merged_at": 1698663145000
}
|
https://api.github.com/repos/huggingface/transformers/issues/27115
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27115/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27115/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27115/events
|
https://github.com/huggingface/transformers/issues/27115
| 1,966,313,969 |
I_kwDOCUB6oc51M5Hx
| 27,115 |
Stop sequence eliminated for mistral models due to `skip_special_tokens=True`
|
{
"login": "rolandoam",
"id": 49346,
"node_id": "MDQ6VXNlcjQ5MzQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/49346?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rolandoam",
"html_url": "https://github.com/rolandoam",
"followers_url": "https://api.github.com/users/rolandoam/followers",
"following_url": "https://api.github.com/users/rolandoam/following{/other_user}",
"gists_url": "https://api.github.com/users/rolandoam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rolandoam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rolandoam/subscriptions",
"organizations_url": "https://api.github.com/users/rolandoam/orgs",
"repos_url": "https://api.github.com/users/rolandoam/repos",
"events_url": "https://api.github.com/users/rolandoam/events{/privacy}",
"received_events_url": "https://api.github.com/users/rolandoam/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @rolandoam 👋 \r\n\r\nThe `pipeline` API was designed for ease of use. For full control over text generation and the tokenization process, you have the `generate` API, as commented in your code snippet :) \r\n\r\nSince there is a clear alternative, we are not considering adding support to the functionality you request for the time being.",
"thanks @gante for the input. I think you're right. I was fixated in using `pipeline` inside my custom handler but there's no reason for that. I can just call the model directly for my use case. I'll close this issue."
] | 1,698 | 1,698 | 1,698 |
NONE
| null |
### System Info
Having access to the stop sequence when using Mistral models (or any model that uses the ChatML prompt template) is very useful: it lets you tell whether the model produced a complete generation or was cut off. Right now, the text-generation pipeline removes the stop token in [postprocess](https://github.com/huggingface/transformers/blame/main/src/transformers/pipelines/text_generation.py#L292). Ideally, this should be a configuration option that can be passed to the pipeline.
@Narsil @gante @ArthurZucker
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Use the following script to compare output directly out of `model.generate` vs out of the pipeline
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/OpenHermes-2-Mistral-7B-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
chat_template = "{% for message in messages %}<|im_start|>{{ '' + message['role'] }}\n{% if message['content'] is not none %}{{ message['content'] }}<|im_end|>\n{% endif %}{% endfor %}"
tokenizer.chat_template = chat_template
chats = [
{"role": "system", "content": "You're a useful AI assistant"},
{"role": "user", "content": "Tell me about AI"},
{"role": "assistant", "content": None }
]
prompt = tokenizer.apply_chat_template(chats, tokenize=False)
#print("\n\n*** Generate:")
#input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
#output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
#print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
trust_remote_code=True,
repetition_penalty=1.1
)
print(pipe(prompt)[0]['generated_text'])
```
If you modify `text_generation.py` and change/comment `skip_special_tokens=True` in the postprocess method, then the output of the pipeline matches the expected output, including the stop sequence.
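As a reference point, here is a minimal workaround sketch (reusing the `model`, `tokenizer` and `prompt` objects from the script above): calling `generate` directly and decoding with `skip_special_tokens=False` keeps the stop sequence (e.g. `<|im_end|>`) in the output.
```python
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
output = model.generate(input_ids, max_new_tokens=512, do_sample=True,
                        temperature=0.7, top_p=0.95, top_k=40)
# Decoding without skipping special tokens preserves the stop sequence
print(tokenizer.decode(output[0], skip_special_tokens=False))
```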
### Expected behavior
The output for mistral models should contain the stop sequence.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27115/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27115/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/27114
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27114/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27114/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27114/events
|
https://github.com/huggingface/transformers/pull/27114
| 1,965,828,764 |
PR_kwDOCUB6oc5d_UM0
| 27,114 |
[`AttentionMaskConverter`] Fix mask `-inf`
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hello! \r\n\r\nWe have pulled this PR from the main branch yesterday. We were having NaN issues with ppo_trainer and llama2-7b-chat. After investigation, we found that the NaN can be reproduced just by generating the 1st token for a batch of 4 sentences and it depends on how we form the batch (i.e. the sentences that fail depend on the size of the batch and which sentences we include in the batch). Including different sentences in the batch changes the padding structure and it seems that the moment you get padding, your risk of NaN increases. Nevertheless, we have seen also NaN with batch=1 (no padding) and float16, so it seems that padding is not the only root of the problem\r\n\r\nWe have observed that the NaN appear in the 31st layer and subsequently in the logits, not in earlier layers. The input_ids and attention mask that generate get seem correct. The example code uses bfloat16 because it seems to alleviate the issue, which is more frequent with float16.\r\n\r\n```\r\nfrom transformers import AutoModelForCausalLM, BitsAndBytesConfig\r\nimport torch\r\n\r\nbnb_config = BitsAndBytesConfig(\r\n load_in_4bit=True,\r\n bnb_4bit_use_double_quant=True,\r\n bnb_4bit_quant_type=\"nf4\",\r\n bnb_4bit_compute_dtype=torch.bfloat16,\r\n )\r\n\r\nn_gpus = torch.cuda.device_count()\r\nmax_memory = f'{40960}MB' # TODO: what if we have more memory?\r\n\r\n\r\nsft_folder = \"/raid/general/llm/llama2/hf_versions/7b-chat-withpad/\"\r\n\r\nsft_model = AutoModelForCausalLM.from_pretrained(\r\n sft_folder,\r\n quantization_config=bnb_config,\r\n device_map=\"auto\", # dispatch efficiently the model on the available ressources\r\n max_memory = {i: max_memory for i in range(n_gpus)},\r\n torch_dtype=torch.bfloat16, \r\n) \r\ntokenizer = AutoTokenizer.from_pretrained(sft_folder, model_max_length=2048)\r\n\r\n# Depending on how we select the batch here, different sentences fail\r\nbb = tokenized_dataset['train']['input_ids'][:2]\r\nbatch_mask = [torch.ones_like(element) for element in bb]\r\ninputs = {\"input_ids\": bb, \"attention_mask\": batch_mask}\r\n\r\ntokenizer.padding_side = \"left\"\r\npadded_inputs = tokenizer.pad(\r\n inputs,\r\n padding=True,\r\n max_length=None,\r\n pad_to_multiple_of=None,\r\n return_tensors=\"pt\",\r\n)\r\n\r\ngeneration_kwargs = {\r\n \"top_k\": 0.0,\r\n \"top_p\": 0.0,\r\n \"temperature\": 1.0,\r\n \"do_sample\": True,\r\n \"pad_token_id\": tokenizer.pad_token_id,\r\n \"max_new_tokens\": 1\r\n}\r\n\r\nresponse_tensors = sft_model.generate(**padded_inputs, **generation_kwargs)\r\n```\r\n\r\n",
"Hi @toritospartan -- any chance you could reproduce the issue with an open-access model OR privately share your model with us? Otherwise, it will be very challenging for us to nail the cause :)",
"We have been able to reproduce the issue with just public data. The reason of it seems to be that, due to a lack of padding token in llama2, we added our own pad token (we added 128 tokens to keep the model efficient as warning said), thinking this token should be ignored anyway. However this seems to produce those NaN in some occasions. We checked the range of the embedding of token 0 (maybe this is the pad token Meta used even if it is not clear in their code or in the export scrip from HF?). The std of this embedding is 0 with mean 0. Our pad token embedding had the std of the _init_weights of the Transformer model (this is expected). Thing is that it is this range that seems to make llama overflow. We have generated a script that makes this happen very often via generating that weight with an exaggerated std, Clear advise on how to manage this situation will make people be less confused because a further question is (are we creating a bias in the model because of which pad token we use?)\r\n\r\n```\r\nfrom transformers import AutoModelForCausalLM, BitsAndBytesConfig, AutoTokenizer\r\nimport torch\r\nimport pandas as pd\r\nfrom datasets import Dataset\r\n\r\n# Model to load\r\nmodel_folder = \"/raid/general/llm/llama2/hf_versions/7b-chat/\"\r\n\r\n# Quantization\r\nbnb_config = BitsAndBytesConfig(\r\n load_in_4bit=True,\r\n bnb_4bit_use_double_quant=True,\r\n bnb_4bit_quant_type=\"nf4\",\r\n bnb_4bit_compute_dtype=torch.float16,\r\n )\r\n\r\n\r\n# Load model\r\nn_gpus = torch.cuda.device_count()\r\nmax_memory = f'{40960}MB'\r\nmodel = AutoModelForCausalLM.from_pretrained(\r\n model_folder,\r\n quantization_config=bnb_config,\r\n device_map=\"auto\", # dispatch efficiently the model on the available ressources\r\n max_memory = {i: max_memory for i in range(n_gpus)},\r\n torch_dtype=torch.float16, \r\n) \r\n# Load tokenizer\r\ntokenizer = AutoTokenizer.from_pretrained(model_folder, model_max_length=2048)\r\n\r\n# \r\ntokenizer.add_special_tokens({'pad_token': '[PAD]'}, replace_additional_special_tokens=False)\r\ntokenizer.add_special_tokens({'additional_special_tokens': [f'[unused_{i}]' for i in range(0,127)]}, replace_additional_special_tokens=False)\r\ntokenizer.pad_token = '[PAD]'\r\ntokenizer.pad_token_id = tokenizer.convert_tokens_to_ids(tokenizer.special_tokens_map['pad_token'])\r\n\r\nold_range = model.config.initializer_range\r\nmodel.config.initializer_range = 10000\r\nmodel.resize_token_embeddings(len(tokenizer))\r\nmodel.config.initializer_range = old_range\r\n\r\n# Generate dataset\r\nprompts = [\r\n \"Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.\",\r\n \"Ut enim ad minim veniam\",\r\n]\r\n\r\nprompt_df = pd.DataFrame(prompts, columns=[\"prompt\"])\r\nprompt_dataset = Dataset.from_pandas(prompt_df)\r\n\r\n# Tokenize dataset\r\ndef tokenize_dataset(dataset, tokenizer):\r\n def tokenize(sample):\r\n sample[\"input_ids\"] = tokenizer.encode(sample[\"prompt\"])\r\n return sample\r\n dataset = dataset.map(tokenize, batched=False)\r\n dataset.set_format(type=\"torch\")\r\n return dataset\r\ntokenized_dataset = tokenize_dataset(prompt_dataset, tokenizer)\r\n\r\nbatch_input_ids = tokenized_dataset['input_ids']\r\nbatch_mask = [torch.ones_like(element) for element in batch_input_ids]\r\ninputs = {\"input_ids\": batch_input_ids, \"attention_mask\": batch_mask}\r\n\r\ntokenizer.padding_side = \"left\"\r\npadded_inputs = tokenizer.pad(\r\n inputs,\r\n padding=True,\r\n max_length=None,\r\n 
pad_to_multiple_of=None,\r\n return_tensors=\"pt\",\r\n)\r\n\r\ngeneration_kwargs = {\r\n \"top_k\": 0.0,\r\n \"top_p\": 0.0,\r\n \"temperature\": 1.0,\r\n \"do_sample\": True,\r\n \"pad_token_id\": tokenizer.pad_token_id,\r\n \"max_new_tokens\": 1\r\n}\r\n\r\nresponse_tensors = model.generate(**padded_inputs, **generation_kwargs)\r\n```",
"(cc @ArthurZucker as I have no idea how adding extra tokens works internally :D)",
"Hey! Thanks both, when adding a new token it is recommended to initialize it's embedding to an average of all the embedding of the embedding layer! This explains it best: https://nlp.stanford.edu/~johnhew/vocab-expansion.html.\r\nWould you mind trying this! 🤗 ",
"@toritospartan, the LLaMA models are unaffected by this PR as [they do masking by hand](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/modeling_llama.py#L426) instead of relying on `AttentionMaskConverter.to_4d`. So do `mistral`, `gpt2`, `falcon`, `t5` and probably many others.\r\n\r\nAs an alternative solution you can do while waiting for existing models to be fixed I'd suggest adding the following after the line 425 of modeling_llama.py (before the masking):\r\n\r\n```python\r\nattn_weights = attn_weights - attn_weights.max(dim=-1, keepdim=True)[0]\r\n```",
"> @toritospartan, the LLaMA models are unaffected by this PR as [they do masking by hand](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/modeling_llama.py#L426) instead of relying on `AttentionMaskConverter.to_4d`. So do `mistral`, `gpt2`, `falcon`, `t5` and probably many others.\r\n> \r\n> As an alternative solution you can do while waiting for existing models to be fixed I'd suggest adding the following after the line 425 of modeling_llama.py (before the masking):\r\n> \r\n> ```python\r\n> attn_weights = attn_weights - attn_weights.max(dim=-1, keepdim=True)[0]\r\n> ```\r\n\r\n@artsobolev Hi,Since the code for modeling_llama has changed, and I'm not sure exactly where you're referring to, I've put in if attention_mask.size() != (bsz, 1, q_len, kv_seq_len):\r\n raise ValueError(\r\n f\"Attention mask should be of size {(bsz, 1, q_len, kv_seq_len)}, but is {attention_mask.size()}\"\r\n )\r\n attn_weights = attn_weights + attention_mask\r\n before attn_weights = attn_weights + attention_mask and didn't get work, still got nan. I'm not sure if is the right place.",
"Llama, mistral and so on do **use** `_prepare_4d_causal_attention_mask` which uses `to_4d` if the mask is provided, `to_causal_4d` otherwise. No the nan values do not arise from the mask anymore, Llama always had instabilities, this PR fixes the ones related to attention mask overflow. Not sure what code you are looking at @artsobolev ?"
] | 1,698 | 1,702 | 1,699 |
COLLABORATOR
| null |
# What does this PR do?
Fixes the `-inf` values appearing in the padding mask due to the way we create it. Also adds the `@dataclass` decorator to `AttentionMaskConverter`, as well as an example in the docs.
Pad tokens are still attended to in some specific cases, which produces different outputs with and without flash attention. Might fix #27050, and is also related to other Llama issues.
FYI @gante for visibility 😉
Basically instead of
```python
-inf, 0, 0, 0 .... 0       -inf,    0,    0,    0 .... 0
-inf, 0, 0, 0 .... 0   +   -inf, -inf,    0,    0 .... 0
-inf, 0, 0, 0 .... 0       -inf, -inf, -inf,    0 .... 0
-inf, 0, 0, 0 .... 0       -inf, -inf, -inf, -inf .... 0
```
we simply mask-fill the second with the first. This way we are sure that the mask does not overflow.
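As an illustrative sketch of the idea (the names below are made up, this is not the actual implementation): combining the two masks with `masked_fill` instead of addition keeps every entry at the finfo minimum, so nothing can sum to `-inf`.
```python
import torch

min_value = torch.finfo(torch.float32).min

def combine_masks(expanded_padding_mask: torch.Tensor, causal_mask: torch.Tensor) -> torch.Tensor:
    # expanded_padding_mask: bool, True where a key position corresponds to padding
    # causal_mask: float, 0.0 for visible positions and `min_value` for future positions
    # mask-fill instead of `causal_mask + padding_bias`: masked entries stay at `min_value`
    # instead of overflowing to -inf
    return causal_mask.masked_fill(expanded_padding_mask, min_value)
```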
Before:
```python
>>> import torch
>>> from transformers.modeling_attn_mask_utils import AttentionMaskConverter
>>> converter = AttentionMaskConverter(True)
>>> converter.to_4d(torch.tensor([[0,0,0,1,1]]), 5, 5)
tensor([[[[-3.4028e+38, -inf, -inf, -3.4028e+38, -3.4028e+38],
[-3.4028e+38, -3.4028e+38, -inf, -3.4028e+38, -3.4028e+38],
[-3.4028e+38, -3.4028e+38, -3.4028e+38, -3.4028e+38, -3.4028e+38],
[-3.4028e+38, -3.4028e+38, -3.4028e+38, 0.0000e+00, -3.4028e+38],
[-3.4028e+38, -3.4028e+38, -3.4028e+38, 0.0000e+00, 0.0000e+00]]]])
```
after:
```python
>>> import torch
>>> from transformers.modeling_attn_mask_utils import AttentionMaskConverter
>>> converter = AttentionMaskConverter(True)
>>> converter.to_4d(torch.tensor([[0,0,0,1,1]]), 5, 5)
tensor([[[[-3.4028e+38, -3.4028e+38, -3.4028e+38, -3.4028e+38, -3.4028e+38],
[-3.4028e+38, -3.4028e+38, -3.4028e+38, -3.4028e+38, -3.4028e+38],
[-3.4028e+38, -3.4028e+38, -3.4028e+38, -3.4028e+38, -3.4028e+38],
[-3.4028e+38, -3.4028e+38, -3.4028e+38, 0.0000e+00, -3.4028e+38],
[-3.4028e+38, -3.4028e+38, -3.4028e+38, 0.0000e+00, 0.0000e+00]]]])
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27114/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27114/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27114",
"html_url": "https://github.com/huggingface/transformers/pull/27114",
"diff_url": "https://github.com/huggingface/transformers/pull/27114.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27114.patch",
"merged_at": 1699626164000
}
|
https://api.github.com/repos/huggingface/transformers/issues/27112
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27112/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27112/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27112/events
|
https://github.com/huggingface/transformers/pull/27112
| 1,965,745,973 |
PR_kwDOCUB6oc5d_CRB
| 27,112 |
device agnostic extended testing
|
{
"login": "statelesshz",
"id": 28150734,
"node_id": "MDQ6VXNlcjI4MTUwNzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/28150734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/statelesshz",
"html_url": "https://github.com/statelesshz",
"followers_url": "https://api.github.com/users/statelesshz/followers",
"following_url": "https://api.github.com/users/statelesshz/following{/other_user}",
"gists_url": "https://api.github.com/users/statelesshz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/statelesshz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/statelesshz/subscriptions",
"organizations_url": "https://api.github.com/users/statelesshz/orgs",
"repos_url": "https://api.github.com/users/statelesshz/repos",
"events_url": "https://api.github.com/users/statelesshz/events{/privacy}",
"received_events_url": "https://api.github.com/users/statelesshz/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"- verified on mac studio m2 with instruction:\r\n ```\r\n RUN_SLOW=1 TRANSFORMERS_BACKEND_DEVICE=\"mps\" TRANSFORMERS_BACKEND_DEVICE_SPEC=\"spec.py\" python -m pytest -v tests/extended\r\n\r\n ```\r\n `spec.py`:\r\n ```\r\n import torch\r\n \r\n DEVICE_NAME = 'mps'\r\n \r\n # Specify device-specific backends to dispatch to.\r\n # If not specified, will fallback to 'default' in 'testing_utils.py`\r\n MANUAL_SEED_FN = torch.mps.manual_seed\r\n EMPTY_CACHE_FN = torch.mps.empty_cache\r\n ```\r\n\r\n- the output:\r\n ```\r\n ============================================================================================================================= test session starts ==============================================================================================================================\r\n platform darwin -- Python 3.10.13, pytest-7.4.3, pluggy-1.3.0 -- /opt/homebrew/Caskroom/miniconda/base/envs/hf/bin/python\r\n cachedir: .pytest_cache\r\n hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase(PosixPath('/Users/yun/github/transformers/.hypothesis/examples'))\r\n rootdir: /Users/yun/github/transformers\r\n configfile: setup.cfg\r\n plugins: hypothesis-6.88.1, dash-2.14.1, timeout-2.2.0, xdist-3.3.1\r\n collected 10 items \r\n \r\n tests/extended/test_trainer_ext.py::TestTrainerExt::test_run_seq2seq \r\n PASSED [ 10%]\r\n tests/extended/test_trainer_ext.py::TestTrainerExt::test_run_seq2seq_apex SKIPPED (test requires apex) [ 20%]\r\n tests/extended/test_trainer_ext.py::TestTrainerExt::test_run_seq2seq_bnb SKIPPED (test requires bnb) [ 30%]\r\n tests/extended/test_trainer_ext.py::TestTrainerExt::test_run_seq2seq_ddp SKIPPED (test requires multiple accelerators) [ 40%]\r\n tests/extended/test_trainer_ext.py::TestTrainerExt::test_run_seq2seq_dp SKIPPED (test requires multiple accelerators) [ 50%]\r\n tests/extended/test_trainer_ext.py::TestTrainerExt::test_run_seq2seq_no_dist PASSED [ 60%]\r\n tests/extended/test_trainer_ext.py::TestTrainerExt::test_trainer_log_level_replica_0_base SKIPPED (test requires multiple accelerators) [ 70%]\r\n tests/extended/test_trainer_ext.py::TestTrainerExt::test_trainer_log_level_replica_1_low SKIPPED (test requires multiple accelerators) [ 80%]\r\n tests/extended/test_trainer_ext.py::TestTrainerExt::test_trainer_log_level_replica_2_high SKIPPED (test requires multiple accelerators) [ 90%]\r\n tests/extended/test_trainer_ext.py::TestTrainerExt::test_trainer_log_level_replica_3_mixed SKIPPED (test requires multiple accelerators) [100%]\r\n \r\n =============================================================================================================================== warnings summary ===============================================================================================================================\r\n ../../../../opt/homebrew/Caskroom/miniconda/base/envs/hf/lib/python3.10/site-packages/_pytest/config/__init__.py:1373\r\n /opt/homebrew/Caskroom/miniconda/base/envs/hf/lib/python3.10/site-packages/_pytest/config/__init__.py:1373: PytestConfigWarning: Unknown config option: doctest_glob\r\n \r\n self._warn_or_fail_if_strict(f\"Unknown config option: {key}\\n\")\r\n \r\n tests/extended/test_trainer_ext.py::TestTrainerExt::test_run_seq2seq\r\n tests/extended/test_trainer_ext.py::TestTrainerExt::test_run_seq2seq_no_dist\r\n /Users/yun/github/transformers/src/transformers/training_args.py:1392: FutureWarning: `--adafactor` is deprecated and will be removed in version 5 of 🤗 Transformers. 
Use `--optim adafactor` instead\r\n warnings.warn(\r\n \r\n tests/extended/test_trainer_ext.py::TestTrainerExt::test_run_seq2seq\r\n /opt/homebrew/Caskroom/miniconda/base/envs/hf/lib/python3.10/site-packages/codecarbon/input.py:9: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html\r\n import pkg_resources\r\n \r\n tests/extended/test_trainer_ext.py::TestTrainerExt::test_run_seq2seq_no_dist\r\n /Users/yun/github/transformers/src/transformers/generation/utils.py:1473: UserWarning: You have modified the pretrained model configuration to control generation. This is a deprecated strategy to control generation and will be removed soon, in a future version. Please use and modify the model generation configuration (see https://huggingface.co/docs/transformers/generation_strategies#default-text-generation-configuration )\r\n warnings.warn(\r\n \r\n tests/extended/test_trainer_ext.py::TestTrainerExt::test_run_seq2seq_no_dist\r\n tests/extended/test_trainer_ext.py::TestTrainerExt::test_run_seq2seq_no_dist\r\n /Users/yun/github/transformers/src/transformers/generation/utils.py:1273: UserWarning: Using the model-agnostic default `max_length` (=20) to control the generation length. We recommend setting `max_new_tokens` to control the maximum length of the generation.\r\n warnings.warn(\r\n \r\n -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html\r\n ============================================================================================================= 2 passed, 8 skipped, 7 warnings in 564.29s (0:09:24) =============================================================================================================\r\n\r\n \r\n ```"
] | 1,698 | 1,698 | 1,698 |
CONTRIBUTOR
| null |
# What does this PR do?
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27112/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27112/timeline
| null | true |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27112",
"html_url": "https://github.com/huggingface/transformers/pull/27112",
"diff_url": "https://github.com/huggingface/transformers/pull/27112.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27112.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/27111
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27111/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27111/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27111/events
|
https://github.com/huggingface/transformers/pull/27111
| 1,965,689,874 |
PR_kwDOCUB6oc5d-1_g
| 27,111 |
Add exllamav2 better
|
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Since @ArthurZucker is out next week, it would be great if you could review this PR @amyeroberts. I'm trying to have this PR in the next release. In this modified version, I make sure to deprecate `disable_exllama` arg in favor of `use_exllama`. ",
"Thanks for the review @amyeroberts . I've addressed all the points. LMK if something is missing ! ",
"Thanks for the deep review @amyeroberts ! I've added the input logic and simplified the link with optimum config. ",
"Thanks again @amyeroberts for iterating on this PR in such a short time ! "
] | 1,698 | 1,698 | 1,698 |
MEMBER
| null |
# What does this PR do?
This PR is a modified version of this [PR](https://github.com/huggingface/transformers/pull/26437) that makes `disable_exllama` go through a deprecation cycle.
I also fixed the `test_device_and_dtype_assignment` test, introduced by this [PR](https://github.com/huggingface/transformers/pull/26761), which broke other tests in the CI.
I confirm that all the tests are green.
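As a usage sketch (the argument names reflect my reading of the PR and may not match the final API exactly): `use_exllama` together with an `exllama_config` dict selects the kernel version, while `disable_exllama` is kept only for the deprecation cycle.
```python
from transformers import AutoModelForCausalLM, GPTQConfig

# New-style config (assumed): explicitly enable the exllama kernels and request version 2.
gptq_config = GPTQConfig(bits=4, use_exllama=True, exllama_config={"version": 2})

# Old-style config (deprecated): passing `disable_exllama` should now emit a deprecation warning.
# gptq_config = GPTQConfig(bits=4, disable_exllama=False)

model = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Llama-2-7B-GPTQ",  # any GPTQ-quantized checkpoint
    quantization_config=gptq_config,
    device_map="auto",
)
```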
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27111/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27111/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27111",
"html_url": "https://github.com/huggingface/transformers/pull/27111",
"diff_url": "https://github.com/huggingface/transformers/pull/27111.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27111.patch",
"merged_at": 1698858561000
}
|
https://api.github.com/repos/huggingface/transformers/issues/27110
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27110/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27110/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27110/events
|
https://github.com/huggingface/transformers/issues/27110
| 1,965,621,261 |
I_kwDOCUB6oc51KQAN
| 27,110 |
attributeerror: 'dataset' object has no attribute 'cardinality'
|
{
"login": "SrikanthChellappa",
"id": 37934673,
"node_id": "MDQ6VXNlcjM3OTM0Njcz",
"avatar_url": "https://avatars.githubusercontent.com/u/37934673?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SrikanthChellappa",
"html_url": "https://github.com/SrikanthChellappa",
"followers_url": "https://api.github.com/users/SrikanthChellappa/followers",
"following_url": "https://api.github.com/users/SrikanthChellappa/following{/other_user}",
"gists_url": "https://api.github.com/users/SrikanthChellappa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SrikanthChellappa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SrikanthChellappa/subscriptions",
"organizations_url": "https://api.github.com/users/SrikanthChellappa/orgs",
"repos_url": "https://api.github.com/users/SrikanthChellappa/repos",
"events_url": "https://api.github.com/users/SrikanthChellappa/events{/privacy}",
"received_events_url": "https://api.github.com/users/SrikanthChellappa/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi, I'm sorry, we don't really support `TFTrainer` anymore! We recommend fine-tuning with Keras methods like `model.fit()` instead. Please see our notebooks on [translation](https://github.com/huggingface/notebooks/blob/main/examples/translation-tf.ipynb) or [summarization](https://github.com/huggingface/notebooks/blob/main/examples/summarization-tf.ipynb) for examples of training TF Seq2Seq models with this approach.",
"Thanks @Rocketknight1 for the update.",
"@Rocketknight1 Can you pls assist me on how to save the model with .keras or .h5 extensions to load it again. I am able to save the model as .keras buty it throws error when i load it again. Pls see the code used as below. Kindly assist\r\n\r\nI was able to save my model earlier using model.save('FlanT5-Chatbot_model.keras')\r\n\r\nWhen i tried loading the model again as below\r\nfrom tensorflow.keras.models import load_model\r\nmodel=load_model('FlanT5-Chatbot_model.keras')\r\n\r\nI am getting \"ModuleNotFoundError\" error. Error stack is given below\r\n\r\nError Stack\r\n---------------------------------------------------------------------------\r\nModuleNotFoundError Traceback (most recent call last)\r\nFile ~\\AppData\\Roaming\\Python\\Python311\\site-packages\\keras\\src\\saving\\serialization_lib.py:800, in _retrieve_class_or_fn(name, registered_name, module, obj_type, full_config, custom_objects)\r\n 799 try:\r\n--> 800 mod = importlib.import_module(module)\r\n 801 except ModuleNotFoundError:\r\n\r\nFile C:\\ProgramData\\anaconda3\\Lib\\importlib\\__init__.py:126, in import_module(name, package)\r\n 125 level += 1\r\n--> 126 return _bootstrap._gcd_import(name[level:], package, level)\r\n\r\nFile <frozen importlib._bootstrap>:1206, in _gcd_import(name, package, level)\r\n\r\nFile <frozen importlib._bootstrap>:1178, in _find_and_load(name, import_)\r\n\r\nFile <frozen importlib._bootstrap>:1128, in _find_and_load_unlocked(name, import_)\r\n\r\nFile <frozen importlib._bootstrap>:241, in _call_with_frames_removed(f, *args, **kwds)\r\n\r\nFile <frozen importlib._bootstrap>:1206, in _gcd_import(name, package, level)\r\n\r\nFile <frozen importlib._bootstrap>:1178, in _find_and_load(name, import_)\r\n\r\nFile <frozen importlib._bootstrap>:1128, in _find_and_load_unlocked(name, import_)\r\n\r\nFile <frozen importlib._bootstrap>:241, in _call_with_frames_removed(f, *args, **kwds)\r\n\r\nFile <frozen importlib._bootstrap>:1206, in _gcd_import(name, package, level)\r\n\r\nFile <frozen importlib._bootstrap>:1178, in _find_and_load(name, import_)\r\n\r\nFile <frozen importlib._bootstrap>:1142, in _find_and_load_unlocked(name, import_)\r\n\r\nModuleNotFoundError: No module named 'transformers.models'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTypeError Traceback (most recent call last)\r\nCell In[4], line 2\r\n 1 from tensorflow.keras.models import load_model\r\n----> 2 model=load_model('FlanT5-Chatbot_model.keras')\r\n\r\nFile ~\\AppData\\Roaming\\Python\\Python311\\site-packages\\keras\\src\\saving\\saving_api.py:254, in load_model(filepath, custom_objects, compile, safe_mode, **kwargs)\r\n 249 if kwargs:\r\n 250 raise ValueError(\r\n 251 \"The following argument(s) are not supported \"\r\n 252 f\"with the native Keras format: {list(kwargs.keys())}\"\r\n 253 )\r\n--> 254 return saving_lib.load_model(\r\n 255 filepath,\r\n 256 custom_objects=custom_objects,\r\n 257 compile=compile,\r\n 258 safe_mode=safe_mode,\r\n 259 )\r\n 261 # Legacy case.\r\n 262 return legacy_sm_saving_lib.load_model(\r\n 263 filepath, custom_objects=custom_objects, compile=compile, **kwargs\r\n 264 )\r\n\r\nFile ~\\AppData\\Roaming\\Python\\Python311\\site-packages\\keras\\src\\saving\\saving_lib.py:281, in load_model(filepath, custom_objects, compile, safe_mode)\r\n 278 asset_store.close()\r\n 280 except Exception as e:\r\n--> 281 raise e\r\n 282 else:\r\n 283 return model\r\n\r\nFile ~\\AppData\\Roaming\\Python\\Python311\\site-packages\\keras\\src\\saving\\saving_lib.py:246, in 
load_model(filepath, custom_objects, compile, safe_mode)\r\n 244 # Construct the model from the configuration file in the archive.\r\n 245 with ObjectSharingScope():\r\n--> 246 model = deserialize_keras_object(\r\n 247 config_dict, custom_objects, safe_mode=safe_mode\r\n 248 )\r\n 250 all_filenames = zf.namelist()\r\n 251 if _VARS_FNAME + \".h5\" in all_filenames:\r\n\r\nFile ~\\AppData\\Roaming\\Python\\Python311\\site-packages\\keras\\src\\saving\\serialization_lib.py:705, in deserialize_keras_object(config, custom_objects, safe_mode, **kwargs)\r\n 702 if obj is not None:\r\n 703 return obj\r\n--> 705 cls = _retrieve_class_or_fn(\r\n 706 class_name,\r\n 707 registered_name,\r\n 708 module,\r\n 709 obj_type=\"class\",\r\n 710 full_config=config,\r\n 711 custom_objects=custom_objects,\r\n 712 )\r\n 714 if isinstance(cls, types.FunctionType):\r\n 715 return cls\r\n\r\nFile ~\\AppData\\Roaming\\Python\\Python311\\site-packages\\keras\\src\\saving\\serialization_lib.py:802, in _retrieve_class_or_fn(name, registered_name, module, obj_type, full_config, custom_objects)\r\n 800 mod = importlib.import_module(module)\r\n 801 except ModuleNotFoundError:\r\n--> 802 raise TypeError(\r\n 803 f\"Could not deserialize {obj_type} '{name}' because \"\r\n 804 f\"its parent module {module} cannot be imported. \"\r\n 805 f\"Full object config: {full_config}\"\r\n 806 )\r\n 807 obj = vars(mod).get(name, None)\r\n 809 if obj is None:\r\n 810 # Special case for keras.metrics.metrics\r\n\r\nTypeError: Could not deserialize class 'TFT5ForConditionalGeneration' because its parent module transformers.models.t5.modeling_tf_t5 cannot be imported. Full object config: {'module': 'transformers.models.t5.modeling_tf_t5', 'class_name': 'TFT5ForConditionalGeneration', 'config': {'vocab_size': 32128, 'd_model': 768, 'd_kv': 64, 'd_ff': 2048, 'num_layers': 12, 'num_decoder_layers': 12, 'num_heads': 12, 'relative_attention_num_buckets': 32, 'relative_attention_max_distance': 128, 'dropout_rate': 0.1, 'classifier_dropout': 0.0, 'layer_norm_epsilon': 1e-06, 'initializer_factor': 1.0, 'feed_forward_proj': 'gated-gelu', 'use_cache': True, 'dense_act_fn': 'gelu_new', 'is_gated_act': True, 'return_dict': True, 'output_hidden_states': False, 'output_attentions': False, 'torchscript': False, 'torch_dtype': None, 'use_bfloat16': False, 'tf_legacy_loss': False, 'pruned_heads': {}, 'tie_word_embeddings': False, 'is_encoder_decoder': True, 'is_decoder': False, 'cross_attention_hidden_size': None, 'add_cross_attention': False, 'tie_encoder_decoder': False, 'max_length': 20, 'min_length': 0, 'do_sample': False, 'early_stopping': False, 'num_beams': 1, 'num_beam_groups': 1, 'diversity_penalty': 0.0, 'temperature': 1.0, 'top_k': 50, 'top_p': 1.0, 'typical_p': 1.0, 'repetition_penalty': 1.0, 'length_penalty': 1.0, 'no_repeat_ngram_size': 0, 'encoder_no_repeat_ngram_size': 0, 'bad_words_ids': None, 'num_return_sequences': 1, 'chunk_size_feed_forward': 0, 'output_scores': False, 'return_dict_in_generate': False, 'forced_bos_token_id': None, 'forced_eos_token_id': None, 'remove_invalid_values': False, 'exponential_decay_length_penalty': None, 'suppress_tokens': None, 'begin_suppress_tokens': None, 'architectures': ['T5ForConditionalGeneration'], 'finetuning_task': None, 'id2label': {'0': 'LABEL_0', '1': 'LABEL_1'}, 'label2id': {'LABEL_0': 0, 'LABEL_1': 1}, 'tokenizer_class': None, 'prefix': None, 'bos_token_id': None, 'pad_token_id': 0, 'eos_token_id': 1, 'sep_token_id': None, 'decoder_start_token_id': 0, 'task_specific_params': 
{'summarization': {'early_stopping': True, 'length_penalty': 2.0, 'max_length': 200, 'min_length': 30, 'no_repeat_ngram_size': 3, 'num_beams': 4, 'prefix': 'summarize: '}, 'translation_en_to_de': {'early_stopping': True, 'max_length': 300, 'num_beams': 4, 'prefix': 'translate English to German: '}, 'translation_en_to_fr': {'early_stopping': True, 'max_length': 300, 'num_beams': 4, 'prefix': 'translate English to French: '}, 'translation_en_to_ro': {'early_stopping': True, 'max_length': 300, 'num_beams': 4, 'prefix': 'translate English to Romanian: '}}, 'problem_type': None, '_name_or_path': 'google/flan-t5-base', 'transformers_version': '4.34.1', 'model_type': 't5', 'n_positions': 512, 'output_past': True}, 'registered_name': 'TFT5ForConditionalGeneration', 'compile_config': {'optimizer': {'module': 'transformers.optimization_tf', 'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': 1.9999999494757503e-05, 'decay': 0.0, 'beta_1': 0.8999999761581421, 'beta_2': 0.9990000128746033, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}, 'registered_name': 'AdamWeightDecay'}, 'loss': {'module': 'builtins', 'class_name': 'function', 'config': 'dummy_loss', 'registered_name': 'function'}, 'metrics': None, 'loss_weights': None, 'weighted_metrics': None, 'run_eagerly': None, 'steps_per_execution': None, 'jit_compile': None}}\r\n\r\n"
] | 1,698 | 1,698 | 1,698 |
NONE
| null |
### System Info
Transformers version: 4.34.1
Tensorflow version: 2.14.0
Python version: 3.11.6
My Code snippet:
```python
from transformers import TFAutoModelForSeq2SeqLM, AutoTokenizer, GenerationConfig, TFTrainingArguments, TFTrainer

training_args = TFTrainingArguments(
    output_dir=output_dir,
    learning_rate=1e-5,
    num_train_epochs=1,
    weight_decay=0.01,
    logging_steps=1,
    max_steps=1
)
trainer = TFTrainer(
    model=original_model,
    args=training_args,
    train_dataset=tokenized_datasets['train'],
    eval_dataset=tokenized_datasets['validation']
)
trainer.train()
```
Error Trace:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[52], line 1
----> 1 trainer.train()
File C:\ProgramData\anaconda3\envs\tfenv\Lib\site-packages\transformers\trainer_tf.py:479, in TFTrainer.train(self)
475 def train(self) -> None:
476 """
477 Train method to train the model.
478 """
--> 479 train_ds = self.get_train_tfdataset()
481 if self.args.debug:
482 tf.summary.trace_on(graph=True, profiler=True)
File C:\ProgramData\anaconda3\envs\tfenv\Lib\site-packages\transformers\trainer_tf.py:159, in TFTrainer.get_train_tfdataset(self)
156 raise ValueError("Trainer: training requires a train_dataset.")
158 self.total_train_batch_size = self.args.train_batch_size * self.args.gradient_accumulation_steps
--> 159 self.num_train_examples = self.train_dataset.cardinality().numpy()
161 if self.num_train_examples < 0:
162 raise ValueError("The training dataset must have an asserted cardinality")
AttributeError: 'Dataset' object has no attribute 'cardinality'
```
@gante @Rocketknight1 @ArthurZucker - Pls assist asap
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
from transformers import TFAutoModelForSeq2SeqLM, AutoTokenizer, GenerationConfig, TFTrainingArguments, TFTrainer

training_args = TFTrainingArguments(
    output_dir=output_dir,
    learning_rate=1e-5,
    num_train_epochs=1,
    weight_decay=0.01,
    logging_steps=1,
    max_steps=1
)
trainer = TFTrainer(
    model=original_model,
    args=training_args,
    train_dataset=tokenized_datasets['train'],
    eval_dataset=tokenized_datasets['validation']
)
trainer.train()
```
### Expected behavior
The training should happen, but it throws an error.
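For reference, the maintainer comments above recommend fine-tuning with Keras instead of the removed `TFTrainer`. A minimal sketch of that flow, reusing the `tokenized_datasets` from the snippet above (the optimizer settings below are illustrative, not taken from the issue):
```python
from transformers import (
    TFAutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    create_optimizer,
)

model = TFAutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")

# Convert the tokenized `datasets.Dataset` into a batched, padded tf.data.Dataset.
tf_train_dataset = model.prepare_tf_dataset(
    tokenized_datasets["train"],
    batch_size=8,
    shuffle=True,
    collate_fn=DataCollatorForSeq2Seq(tokenizer, model=model, return_tensors="tf"),
)

optimizer, _ = create_optimizer(init_lr=1e-5, num_warmup_steps=0, num_train_steps=1000)
model.compile(optimizer=optimizer)  # the model supplies its own loss when none is given
model.fit(tf_train_dataset, epochs=1)
```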
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27110/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27110/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/27109
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27109/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27109/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27109/events
|
https://github.com/huggingface/transformers/pull/27109
| 1,965,540,152 |
PR_kwDOCUB6oc5d-VJG
| 27,109 |
[`Fuyu`] Replace it to `BatchFeature`
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"THanks !\r\nI just copied over the logic that was in place in BLIP - https://github.com/huggingface/transformers/blob/main/src/transformers/models/blip/processing_blip.py#L129 (the typehint is wrong there BTW, it returns a `BatchFeature`) per my understanding for processors that have both text and image input uses `BatchFeature` let me know if another approach is preferred @amyeroberts ",
"_The documentation is not available anymore as the PR was closed or merged._",
"@younesbelkada Thanks for addressing this! \r\n\r\nIf it's OK with you - can we hold off on this for a day or two? I'm currently working on refactoring the image processing and processing code for Fuyu and this will be addressed there too :) \r\n\r\nIf you look at #27007 - you'll see that there's a custom `BatchEncoding` class added (it should actually be a `BatchFeature` class because there's float tensors). This is to address the atypical data structure that the processor class is returning - lists of lists instead of tensors. This is because each sample in a minibatch can have a variable number of images. There's an internal discussion on slack asking how we should represent the input/output data to reflect this. At the moment, we can wrap with `BatchFeature` as done in this PR but I'm not certain it extends to batch sizes of more than 1. \r\n",
"If it's blocking - then we can merge this and I can rebase the changes into my working branch",
"Thanks @amyeroberts your explanation makes sense to me! I was not aware of https://github.com/huggingface/transformers/pull/27007 and it is great that this issue is being addressed there.\r\nDefinitely ok for me to wait a bit before this gets merged! I just wanted to make sure users have a consistent API for multimodal models for the next release (i.e. avoid looping over the processor outputs), perhaps if https://github.com/huggingface/transformers/pull/27007 is not ready for the release we can merge this PR first, what do you think?",
"Closing this PR as https://github.com/huggingface/transformers/pull/27007 is going to be merged"
] | 1,698 | 1,698 | 1,698 |
CONTRIBUTOR
| null |
# What does this PR do?
Right now users need to manually loop over `FuyuProcessor`'s output and apply `to` to each element. One should use `BatchFeature` from `image_processing_utils` and call `to` directly on the processed elements.
Before this PR, to run inference on 4-bit models users needed to do:
```python
model_inputs = processor(text=text_prompt, images=raw_image)
for k, v in model_inputs.items():
if v.dtype != torch.long:
v = v.to(torch.float16)
model_inputs[k] = v.to("cuda")
```
Now they just have to:
```python
model_inputs = processor(text=text_prompt, images=raw_image, return_tensors="pt").to("cuda", torch.float16)
```
cc @ArthurZucker
Script to run the model in 4bit:
```python
import torch
from transformers import FuyuProcessor, FuyuForCausalLM
from PIL import Image
import requests
# load model and processor
model_id = "adept/fuyu-8b"
processor = FuyuProcessor.from_pretrained(model_id)
model = FuyuForCausalLM.from_pretrained(model_id, device_map="cuda:0", load_in_4bit=True)
# prepare inputs for the model
text_prompt = "Generate a coco-style caption.\n"
img_url = 'https://huggingface.co/adept/fuyu-8b/resolve/main/bus.png'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
inputs = processor(text=text_prompt, images=raw_image, return_tensors="pt").to("cuda", torch.float16)
# autoregressively generate text
generation_output = model.generate(**inputs, max_new_tokens=7)
generation_text = processor.batch_decode(generation_output[:, -7:], skip_special_tokens=True)
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27109/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27109/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27109",
"html_url": "https://github.com/huggingface/transformers/pull/27109",
"diff_url": "https://github.com/huggingface/transformers/pull/27109.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27109.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/27108
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27108/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27108/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27108/events
|
https://github.com/huggingface/transformers/issues/27108
| 1,965,501,714 |
I_kwDOCUB6oc51Jy0S
| 27,108 |
Time Series Transformer generate error when input_size = 1
|
{
"login": "ricardokleinklein",
"id": 7894859,
"node_id": "MDQ6VXNlcjc4OTQ4NTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/7894859?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ricardokleinklein",
"html_url": "https://github.com/ricardokleinklein",
"followers_url": "https://api.github.com/users/ricardokleinklein/followers",
"following_url": "https://api.github.com/users/ricardokleinklein/following{/other_user}",
"gists_url": "https://api.github.com/users/ricardokleinklein/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ricardokleinklein/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ricardokleinklein/subscriptions",
"organizations_url": "https://api.github.com/users/ricardokleinklein/orgs",
"repos_url": "https://api.github.com/users/ricardokleinklein/repos",
"events_url": "https://api.github.com/users/ricardokleinklein/events{/privacy}",
"received_events_url": "https://api.github.com/users/ricardokleinklein/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"thanks ok @ricardokleinklein having a look now",
"Hi! I've been checking the example and my particular case, and seems it was a mistake from my side after all. Closing the issue - thank you!"
] | 1,698 | 1,698 | 1,698 |
NONE
| null |
### System Info
Related to #22827 (@kashif ?). In the solution proposed there as an example, when setting input_size = 1, the model crashes (log below).
```
Traceback (most recent call last):
File "/Users/ricardokleinlein/Desktop/univariate_probabilisticTransformer.py", line 182, in <module>
main()
File "/Users/ricardokleinlein/Desktop/univariate_probabilisticTransformer.py", line 89, in main
outputs = model.generate(
File "/Users/ricardokleinlein/Desktop/.venv/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/Users/ricardokleinlein/Desktop/venv/lib/python3.9/site-packages/transformers/models/time_series_transformer/modeling_time_series_transformer.py", line 1819, in generate
print(repeated_features[:, :, k+1].shape)
IndexError: index 3 is out of bounds for dimension 2 with size 3
```
Might there be an issue with the processing of different shapes? I'm facing the exact same error this issue was opened for, but in a case in which I'd need input_size to be 1, and I'm not quite sure how to proceed.
Thank you for your help!
### Reproduction
```
import torch
from transformers import TimeSeriesTransformerConfig, TimeSeriesTransformerForPrediction
batch_size = 32
context_length = 5
prediction_length = 3
input_size = 1
num_time_features = 1
lags_sequence = [1]
config = TimeSeriesTransformerConfig(prediction_length=prediction_length,
context_length=context_length,
input_size=input_size,
lags_sequence=lags_sequence,
num_time_features=num_time_features,
num_static_categorical_features=0,
num_static_real_features=0,
num_dynamic_real_features=0,
embedding_dimension=64,
encoder_ffn_dim=32,
decoder_ffn_dim=32,
encoder_attention_heads=2,
decoder_attention_heads=2,
encoder_layers=2,
decoder_layers=2,
is_encoder_decoder=True,
activation_function="gelu",
d_model=64,
dropout=0.1,
encoder_layerdrop=0.1,
decoder_layerdrop=0.1,
attention_dropout=0.1,
activation_dropout=0.1,
num_parallel_samples=100,
init_std=0.02
)
model = TimeSeriesTransformerForPrediction(config)
# input past seq length is context_length plus largest lag value:
outputs = model.generate(past_values=torch.randn((batch_size, context_length+max(lags_sequence))),
past_time_features=torch.randn((batch_size, context_length+max(lags_sequence), num_time_features)),
past_observed_mask=torch.ones((batch_size, context_length+max(lags_sequence))),
future_time_features=torch.randn((batch_size, prediction_length, num_time_features)),
)
print(outputs.["sequences"].shape)
```
### Expected behavior
``outputs["sequences"].shape`` should be equal to ``torch.Size([32, 100, 3])``
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27108/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27108/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/27107
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27107/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27107/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27107/events
|
https://github.com/huggingface/transformers/issues/27107
| 1,965,482,282 |
I_kwDOCUB6oc51JuEq
| 27,107 |
How to export a Marian model in rust ?
|
{
"login": "flutter-painter",
"id": 14161798,
"node_id": "MDQ6VXNlcjE0MTYxNzk4",
"avatar_url": "https://avatars.githubusercontent.com/u/14161798?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/flutter-painter",
"html_url": "https://github.com/flutter-painter",
"followers_url": "https://api.github.com/users/flutter-painter/followers",
"following_url": "https://api.github.com/users/flutter-painter/following{/other_user}",
"gists_url": "https://api.github.com/users/flutter-painter/gists{/gist_id}",
"starred_url": "https://api.github.com/users/flutter-painter/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flutter-painter/subscriptions",
"organizations_url": "https://api.github.com/users/flutter-painter/orgs",
"repos_url": "https://api.github.com/users/flutter-painter/repos",
"events_url": "https://api.github.com/users/flutter-painter/events{/privacy}",
"received_events_url": "https://api.github.com/users/flutter-painter/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"They're models from https://github.com/guillaume-be/rust-bert you can probably ask there! cc @guillaume-be for info",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,698 | 1,701 | 1,701 |
NONE
| null |
Most models based on Marian are also available in Rust, such as Helsinki-NLP/opus-mt-en-roa.
Is it possible to do this using transformers?
Did you assist Helsinki-NLP in exporting the models to Rust?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27107/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27107/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/27106
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27106/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27106/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27106/events
|
https://github.com/huggingface/transformers/issues/27106
| 1,965,256,305 |
I_kwDOCUB6oc51I25x
| 27,106 |
Torch.compile(model) causes Trainer to drop all columns of the dataset.
|
{
"login": "filbe1",
"id": 148529987,
"node_id": "U_kgDOCNpjQw",
"avatar_url": "https://avatars.githubusercontent.com/u/148529987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/filbe1",
"html_url": "https://github.com/filbe1",
"followers_url": "https://api.github.com/users/filbe1/followers",
"following_url": "https://api.github.com/users/filbe1/following{/other_user}",
"gists_url": "https://api.github.com/users/filbe1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/filbe1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/filbe1/subscriptions",
"organizations_url": "https://api.github.com/users/filbe1/orgs",
"repos_url": "https://api.github.com/users/filbe1/repos",
"events_url": "https://api.github.com/users/filbe1/events{/privacy}",
"received_events_url": "https://api.github.com/users/filbe1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"I am happy to take a look at this issue",
"Hello, the way to use Torch Compile with Trainer is as below. Please let us know if that solves the issue:\r\n\r\n```python\r\ntraqiner_args = TrainerArguments(\r\n torch_compile_backend=\"inductor\", # sets `torch_compile=True` if not passed.\r\n torch_compile_mode=\"reduce-overhead\", # sets `torch_compile=True` if not passed\r\n)\r\n```\r\n\r\nRefer to the docstrigns of these parameters for more information. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,698 | 1,705 | 1,705 |
NONE
| null |
### System Info
Hello.
I have a working model and I decided to try to speed up training by using torch.compile(). However, when I pass the compiled model to the Trainer it seems like it cannot get the arguments of the forward function, so it promptly drops all columns of the dataset, reducing its size to zero.
When I pass remove_unused_columns=False to the TrainingArguments it works just fine, and it actually passes data to the forward function correctly.
The environment:
- `transformers` version: 4.34.1
- Platform: Linux-6.2.0-35-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: 0.24.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu121 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: NO
- Using distributed or parallel set-up in script?: NO
@muellerz @pacman100
The model is a transformer encoder architecture.
Forward function's signature: `forward(self, src: Tensor, target: Tensor = None, labels: Tensor = None, src_mask: Tensor = None) -> Tensor:`
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Code sample: sorry, the full code is part of ongoing research and as such I cannot disclose it.
```python
from torch import compile
from transformers import Trainer, TrainingArguments

model_opt = compile(model, mode='reduce-overhead')
pretraining_args = TrainingArguments(output_dir="pretraining", do_eval=False)
pretrainer = Trainer(
    model=model_opt,
    args=pretraining_args,
    train_dataset=dataset['train'],
    data_collator=pretrain_collator
)
pretrainer.train()
```
Error message:
```
pretrainer.train()
File "/home/filbe/.local/lib/python3.10/site-packages/transformers/trainer.py", line 1591, in train
return inner_training_loop(
File "/home/filbe/.local/lib/python3.10/site-packages/transformers/trainer.py", line 1870, in _inner_training_loop
for step, inputs in enumerate(epoch_iterator):
File "/home/filbe/.local/lib/python3.10/site-packages/accelerate/data_loader.py", line 451, in __iter__
current_batch = next(dataloader_iter)
File "/home/filbe/.local/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 630, in __next__
data = self._next_data()
File "/home/filbe/.local/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 674, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/home/filbe/.local/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
data = self.dataset.__getitems__(possibly_batched_index)
File "/home/filbe/.local/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 2807, in __getitems__
batch = self.__getitem__(keys)
File "/home/filbe/.local/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 2803, in __getitem__
return self._getitem(key)
File "/home/filbe/.local/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 2787, in _getitem
pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
File "/home/filbe/.local/lib/python3.10/site-packages/datasets/formatting/formatting.py", line 583, in query_table
_check_valid_index_key(key, size)
File "/home/filbe/.local/lib/python3.10/site-packages/datasets/formatting/formatting.py", line 536, in _check_valid_index_key
_check_valid_index_key(int(max(key)), size=size)
File "/home/filbe/.local/lib/python3.10/site-packages/datasets/formatting/formatting.py", line 526, in _check_valid_index_key
raise IndexError(f"Invalid key: {key} is out of bounds for size {size}")
IndexError: Invalid key: 776 is out of bounds for size 0
0%| | 0/300 [00:00<?, ?it/s]
```
### Expected behavior
I would have expected the compiled model to work exactly as the uncompiled one would, as that is what the Pytorch documentation suggest.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27106/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27106/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/27105
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27105/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27105/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27105/events
|
https://github.com/huggingface/transformers/pull/27105
| 1,965,139,755 |
PR_kwDOCUB6oc5d887p
| 27,105 |
Provide alternative when warning on use_auth_token
|
{
"login": "Wauplin",
"id": 11801849,
"node_id": "MDQ6VXNlcjExODAxODQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Wauplin",
"html_url": "https://github.com/Wauplin",
"followers_url": "https://api.github.com/users/Wauplin/followers",
"following_url": "https://api.github.com/users/Wauplin/following{/other_user}",
"gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions",
"organizations_url": "https://api.github.com/users/Wauplin/orgs",
"repos_url": "https://api.github.com/users/Wauplin/repos",
"events_url": "https://api.github.com/users/Wauplin/events{/privacy}",
"received_events_url": "https://api.github.com/users/Wauplin/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Simply added 4 words, nothing more :wink: ",
"Well, 3 reviews so I'll merged! :smile: Thanks everyone :)",
"thanks for the prompt response @Wauplin "
] | 1,698 | 1,698 | 1,698 |
CONTRIBUTOR
| null |
This PR updates the warning message when a user uses `use_auth_token` instead of `token`. In some cases, the deprecation warning states that `use_auth_token` is deprecated without mentioning `token` as an alternative. This PR fixes this.
Opened this issue after some users got confused they wouldn't be able to authenticate anymore (see [Forum](https://discuss.huggingface.co/t/the-use-auth-token-argument-is-deprecated-and-will-be-removed-in-v5-of-transformers/53943)). cc @radames who pinged me there.
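For illustration only (this is a hypothetical helper, not the code actually touched by this PR), the improved message would look roughly like:
```python
import warnings


def handle_deprecated_use_auth_token(kwargs: dict) -> dict:
    # Hypothetical helper: pop the deprecated argument and point the user at `token` in the warning text.
    if "use_auth_token" in kwargs:
        warnings.warn(
            "The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. "
            "Please use `token` instead.",
            FutureWarning,
        )
        kwargs.setdefault("token", kwargs.pop("use_auth_token"))
    return kwargs
```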
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27105/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27105/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27105",
"html_url": "https://github.com/huggingface/transformers/pull/27105",
"diff_url": "https://github.com/huggingface/transformers/pull/27105.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27105.patch",
"merged_at": 1698409974000
}
|
https://api.github.com/repos/huggingface/transformers/issues/27104
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27104/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27104/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27104/events
|
https://github.com/huggingface/transformers/pull/27104
| 1,965,107,650 |
PR_kwDOCUB6oc5d81-L
| 27,104 |
Fix docstring and type hint for resize
|
{
"login": "daniilgaltsev",
"id": 75201860,
"node_id": "MDQ6VXNlcjc1MjAxODYw",
"avatar_url": "https://avatars.githubusercontent.com/u/75201860?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/daniilgaltsev",
"html_url": "https://github.com/daniilgaltsev",
"followers_url": "https://api.github.com/users/daniilgaltsev/followers",
"following_url": "https://api.github.com/users/daniilgaltsev/following{/other_user}",
"gists_url": "https://api.github.com/users/daniilgaltsev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/daniilgaltsev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/daniilgaltsev/subscriptions",
"organizations_url": "https://api.github.com/users/daniilgaltsev/orgs",
"repos_url": "https://api.github.com/users/daniilgaltsev/repos",
"events_url": "https://api.github.com/users/daniilgaltsev/events{/privacy}",
"received_events_url": "https://api.github.com/users/daniilgaltsev/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27104). All of your documentation changes will be reflected on that endpoint."
] | 1,698 | 1,698 | 1,698 |
CONTRIBUTOR
| null |
# What does this PR do?
Updates the type hint and the docstring for `resize` since the image transformation functions should work with numpy arrays.
Closes #26986
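For context, a simplified sketch of the kind of annotation being corrected (this is not the full `resize` signature, just an illustration):
```python
from typing import Optional, Tuple

import numpy as np


# Illustrative stub: the hint and docstring should describe `np.ndarray` inputs/outputs,
# since the image transformation functions operate on numpy arrays.
def resize(image: np.ndarray, size: Tuple[int, int], resample: Optional[int] = None) -> np.ndarray:
    """Resize `image`, given as an `np.ndarray`, to `size` and return an `np.ndarray`."""
    ...
```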
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@rafaelpadilla
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27104/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27104/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27104",
"html_url": "https://github.com/huggingface/transformers/pull/27104",
"diff_url": "https://github.com/huggingface/transformers/pull/27104.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27104.patch",
"merged_at": 1698436210000
}
|
https://api.github.com/repos/huggingface/transformers/issues/27103
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27103/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27103/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27103/events
|
https://github.com/huggingface/transformers/issues/27103
| 1,965,096,536 |
I_kwDOCUB6oc51IP5Y
| 27,103 |
model.generate(**inputs) breaks when inputs are batched on GPU
|
{
"login": "vitalyshalumov",
"id": 33824221,
"node_id": "MDQ6VXNlcjMzODI0MjIx",
"avatar_url": "https://avatars.githubusercontent.com/u/33824221?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vitalyshalumov",
"html_url": "https://github.com/vitalyshalumov",
"followers_url": "https://api.github.com/users/vitalyshalumov/followers",
"following_url": "https://api.github.com/users/vitalyshalumov/following{/other_user}",
"gists_url": "https://api.github.com/users/vitalyshalumov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vitalyshalumov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vitalyshalumov/subscriptions",
"organizations_url": "https://api.github.com/users/vitalyshalumov/orgs",
"repos_url": "https://api.github.com/users/vitalyshalumov/repos",
"events_url": "https://api.github.com/users/vitalyshalumov/events{/privacy}",
"received_events_url": "https://api.github.com/users/vitalyshalumov/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Doesn't seems like the model was put on the device when you did `inputs.to(\"cuda\")` !Did you try setting `model.to(\"cuda\")` as well? ",
"model.to('cuda') resolves the issue. Thanks!"
] | 1,698 | 1,698 | 1,698 |
NONE
| null |
### System Info
- `transformers` version: 4.34.1
- Platform: Linux-5.15.0-1050-azure-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.18.0
- Safetensors version: 0.4.0
- Accelerate version: 0.24.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I'm calling `generate` on inputs that I put on the GPU, using an NLLB model.
When everything works:
1. when using a string as an input on cpu
2. when using a string as an input on gpu
3. when using a batch as an input on cpu
When it breaks:
when using a batch as an input on gpu:
Example code: Translation from English to English
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-1.3B", src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-1.3B")
article = 'This does not work'
# works
#inputs = tokenizer([article, article, article, article, article], return_tensors="pt")
inputs = tokenizer.batch_encode_plus([article, article, article, article, article], return_tensors="pt")
# does not work
#inputs = tokenizer([article, article, article, article, article], return_tensors="pt").to("cuda")
#inputs = tokenizer.batch_encode_plus([article, article, article, article, article], return_tensors="pt").to("cuda")
translated_tokens = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["eng_Latn"])
translated_text = tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[:]
```
The error given is:
```
Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)
File ..... in <module>
    translated_tokens = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["eng_Latn"])
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)
```
### Expected behavior
Inference on batched inputs that are on GPU.
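As the comments above point out, the failure comes from the model staying on the CPU while the batched inputs are moved to CUDA. A minimal sketch of the fix, assuming the same model and tokenizer as in the reproduction; the only addition is loading the model onto the GPU:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-1.3B", src_lang="eng_Latn")
# Put the model on the same device as the batched inputs
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-1.3B").to("cuda")

article = "This does not work"
inputs = tokenizer([article] * 5, return_tensors="pt").to("cuda")
translated_tokens = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["eng_Latn"])
translated_text = tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)
```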
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27103/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27103/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/27102
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27102/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27102/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27102/events
|
https://github.com/huggingface/transformers/pull/27102
| 1,965,068,822 |
PR_kwDOCUB6oc5d8tmS
| 27,102 |
Revert "add exllamav2 arg"
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I think we can open an issue and can just do a follow up PR no?",
"Let's give @SunMarc some more time not sure if he'll get through this before next week and let's not forget about this! 🤗 ",
"OK sounds great!"
] | 1,698 | 1,698 | 1,698 |
COLLABORATOR
| null |
Reverts huggingface/transformers#26437 in order to have a proper deprecation cycle cc @SunMarc
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27102/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27102/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27102",
"html_url": "https://github.com/huggingface/transformers/pull/27102",
"diff_url": "https://github.com/huggingface/transformers/pull/27102.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27102.patch",
"merged_at": 1698398586000
}
|
https://api.github.com/repos/huggingface/transformers/issues/27101
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27101/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27101/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27101/events
|
https://github.com/huggingface/transformers/issues/27101
| 1,965,067,348 |
I_kwDOCUB6oc51IIxU
| 27,101 |
Convert script for llama2 is not correct
|
{
"login": "fancyerii",
"id": 5372812,
"node_id": "MDQ6VXNlcjUzNzI4MTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5372812?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fancyerii",
"html_url": "https://github.com/fancyerii",
"followers_url": "https://api.github.com/users/fancyerii/followers",
"following_url": "https://api.github.com/users/fancyerii/following{/other_user}",
"gists_url": "https://api.github.com/users/fancyerii/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fancyerii/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fancyerii/subscriptions",
"organizations_url": "https://api.github.com/users/fancyerii/orgs",
"repos_url": "https://api.github.com/users/fancyerii/repos",
"events_url": "https://api.github.com/users/fancyerii/events{/privacy}",
"received_events_url": "https://api.github.com/users/fancyerii/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hey, it's not a very serious issue no, the max_position embedding can be changed in the config and all converted checkpoints online are working as expected. Unless you have the need to convert them yourself and don't change it, yes generating after 2k will deteriorate but pretty minimal. \r\n\r\nPlease make sure you read the issue you linked as it has pretty much all the answers",
"Feel free to open a PR to change this in the [conversion script](https://github.com/ArthurZucker/transformers/blob/72e9bd23250811083e8b2a37fd6143779d85cc51/src/transformers/models/llama/convert_llama_weights_to_hf.py#L101-L104):\r\n```diff\r\n if base > 10000.0:\r\n max_position_embeddings = 16384\r\n else:\r\n- max_position_embeddings = 2048\r\n+ max_position_embeddings = 4096\r\n```",
"So I need just modify the config.json and it's all ok? I am converting llama2 myself.",
"Yes even if you modify after conversion it's alright ",
"thanks"
] | 1,698 | 1,698 | 1,698 |
NONE
| null |
### System Info
- `transformers` version: 4.34.1
- Platform: Linux-4.15.0-213-generic-x86_64-with-glibc2.27
- Python version: 3.9.18
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: 0.24.0
- Accelerate config: not found
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
TRANSFORM=`python -c "import transformers;print('/'.join(transformers.__file__.split('/')[:-1])+'/models/llama/convert_llama_weights_to_hf.py')"`
python ${TRANSFORM} --input_dir llama-2-7b-chat/ --model_size 7B --output_dir 7B-chat
```
In config.json, `max_position_embeddings` is 2048 (not 4096):
```
{
"architectures": [
"LlamaForCausalLM"
],
"max_position_embeddings": 2048,
"model_type": "llama",
}
```
According to [this issue](https://github.com/facebookresearch/llama/issues/359), Word/position embedding after 2k is not converted correctly.
If that's true, it's a very serious problem.
### Expected behavior
Correctly convert llama2 and, as [Daryl149](https://github.com/Daryl149) tested in that issue, ask a question whose correct answer sits at a position beyond 2k.
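As the comments above suggest, the value can simply be corrected in the saved config after conversion; a minimal sketch, assuming the converted checkpoint lives in the local `7B-chat` directory from the reproduction command:
```python
from transformers import AutoConfig

# Load the config written by the conversion script and fix the context length
config = AutoConfig.from_pretrained("7B-chat")
config.max_position_embeddings = 4096  # llama2 uses a 4k context window
config.save_pretrained("7B-chat")
```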
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27101/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27101/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/27100
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27100/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27100/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27100/events
|
https://github.com/huggingface/transformers/issues/27100
| 1,965,022,439 |
I_kwDOCUB6oc51H9zn
| 27,100 |
NotImplementedError when calling AutoTokenizer from ctransformers package
|
{
"login": "heiko-braun",
"id": 369380,
"node_id": "MDQ6VXNlcjM2OTM4MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/369380?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/heiko-braun",
"html_url": "https://github.com/heiko-braun",
"followers_url": "https://api.github.com/users/heiko-braun/followers",
"following_url": "https://api.github.com/users/heiko-braun/following{/other_user}",
"gists_url": "https://api.github.com/users/heiko-braun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/heiko-braun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/heiko-braun/subscriptions",
"organizations_url": "https://api.github.com/users/heiko-braun/orgs",
"repos_url": "https://api.github.com/users/heiko-braun/repos",
"events_url": "https://api.github.com/users/heiko-braun/events{/privacy}",
"received_events_url": "https://api.github.com/users/heiko-braun/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"It seems to bump off https://github.com/marella/ctransformers/blob/main/ctransformers/transformers.py#L84 into https://github.com/huggingface/transformers/blob/main/src/transformers/tokenization_utils.py#L367, but I don't see where it runs into the NotImplementedError",
"Maybe it's realted to https://github.com/marella/ctransformers/issues/154?",
"Closing this one in favour of https://github.com/marella/ctransformers/issues/154"
] | 1,698 | 1,698 | 1,698 |
NONE
| null |
### System Info
```
Python 3.10.8
transformers==4.34.1
ctransformers==0.2.27
```
I am tinkering with `TheBloke/Mistral-7B-Instruct-v0.1-GGUF`, but when calling `AutoTokenizer`, I run into a `NotImplementedError`.
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from ctransformers import AutoModelForCausalLM, AutoTokenizer
model_name = "TheBloke/Mistral-7B-Instruct-v0.1-GGUF"
model_file = "mistral-7b-instruct-v0.1.Q4_K_M.gguf"
model = AutoModelForCausalLM.from_pretrained(
model_name,
model_file=model_file,
model_type="mistral",
gpu_layers=0,
hf=True
)
```
When trying to instantiate a tokeniser, I run into this issue:
```
tokenizer = AutoTokenizer.from_pretrained(model)
```
Error:
```
NotImplementedError Traceback (most recent call last)
[/workspaces/codespaces-jupyter/notebooks/summarise.ipynb](https://0vudeajo4kvkl20agapvl4rui237bovpe0k8r5lq5al4to17l0hk.assets.github.dev/workspaces/codespaces-jupyter/notebooks/summarise.ipynb) Cell 4 line 2
<a href='vscode-notebook-cell://codespaces%2Bhumble-dollop-9x4wqpqp2jww/workspaces/codespaces-jupyter/notebooks/summarise.ipynb#W3sdnNjb2RlLXJlbW90ZQ%3D%3D?line=0'>1</a> #Tokenizer
----> <a href='vscode-notebook-cell://codespaces%2Bhumble-dollop-9x4wqpqp2jww/workspaces/codespaces-jupyter/notebooks/summarise.ipynb#W3sdnNjb2RlLXJlbW90ZQ%3D%3D?line=1'>2</a> tokenizer = AutoTokenizer.from_pretrained(model)
File [~/.python/current/lib/python3.10/site-packages/ctransformers/hub.py:268](https://0vudeajo4kvkl20agapvl4rui237bovpe0k8r5lq5al4to17l0hk.assets.github.dev/stable/f1b07bd25dfad64b0167beb15359ae573aecd2cc/out/vs/workbench/contrib/webview/browser/pre/~/.python/current/lib/python3.10/site-packages/ctransformers/hub.py:268), in AutoTokenizer.from_pretrained(cls, model)
261 if not isinstance(model, CTransformersModel):
262 raise TypeError(
263 f"Currently `AutoTokenizer.from_pretrained` only accepts a model object. Please use:\n\n"
264 " model = AutoModelForCausalLM.from_pretrained(..., hf=True)\n"
265 " tokenizer = AutoTokenizer.from_pretrained(model)"
266 )
--> 268 return CTransformersTokenizer(model._llm)
File [~/.python/current/lib/python3.10/site-packages/ctransformers/transformers.py:84](https://0vudeajo4kvkl20agapvl4rui237bovpe0k8r5lq5al4to17l0hk.assets.github.dev/stable/f1b07bd25dfad64b0167beb15359ae573aecd2cc/out/vs/workbench/contrib/webview/browser/pre/~/.python/current/lib/python3.10/site-packages/ctransformers/transformers.py:84), in CTransformersTokenizer.__init__(self, llm, **kwargs)
83 def __init__(self, llm: LLM, **kwargs):
---> 84 super().__init__(**kwargs)
85 self._llm = llm
File [~/.python/current/lib/python3.10/site-packages/transformers/tokenization_utils.py:367](https://0vudeajo4kvkl20agapvl4rui237bovpe0k8r5lq5al4to17l0hk.assets.github.dev/stable/f1b07bd25dfad64b0167beb15359ae573aecd2cc/out/vs/workbench/contrib/webview/browser/pre/~/.python/current/lib/python3.10/site-packages/transformers/tokenization_utils.py:367), in PreTrainedTokenizer.__init__(self, **kwargs)
363 super().__init__(**kwargs)
365 # 4. If some of the special tokens are not part of the vocab, we add them, at the end.
366 # the order of addition is the same as self.SPECIAL_TOKENS_ATTRIBUTES following `tokenizers`
--> 367 self._add_tokens(
...
1675 `Dict[str, int]`: The vocabulary.
1676 """
-> 1677 raise NotImplementedError()
```
### Expected behavior
Successful instantiation of a tokeniser
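One possible workaround (an assumption on my part, not something confirmed in this thread) is to skip the ctransformers tokenizer wrapper and load the tokenizer of the original base model directly with transformers:
```python
from transformers import AutoTokenizer

# Assumption: the GGUF file was quantized from this base repository
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
print(tokenizer("Hello world")["input_ids"])
```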
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27100/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27100/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/27099
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27099/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27099/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27099/events
|
https://github.com/huggingface/transformers/pull/27099
| 1,964,938,241 |
PR_kwDOCUB6oc5d8RqX
| 27,099 |
[`Tokenizer Serialization`] Fix the broken serialisation
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27099). All of your documentation changes will be reflected on that endpoint.",
"Pegasus is the only slow failure I witnessed so checking this now before merging! ",
"Ok, the issue is that when we force the added tokens encoder in the slow tokenizer, the fast of course can't do this. So the eos token gets replaced at index 0 in slow but not in fast. \r\nWill update to force the default vocab to the default tokens. "
] | 1,698 | 1,702 | 1,702 |
COLLABORATOR
| null |
# What does this PR do?
Should fix some serialization issues, mostly save_pretrained with all the init kwargs, and from_pretrained with dicts.
fixes #26732
With main:
```python
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("EleutherAI/Llemma_7b", use_fast=False)
File ~/Work/transformers/src/transformers/tokenization_utils_base.py:2253, in PreTrainedTokenizerBase._from_pretrained(cls, resolved_vocab_files, pretrained_model_name_or_path, init_configuration, token, cache_dir, local_files_only, _commit_hash, _is_local, *init_inputs, **kwargs)
2251 if added_tokens_map != {} and init_kwargs[key] is not None:
2252 if key != "additional_special_tokens":
-> 2253 init_kwargs[key] = added_tokens_map.get(init_kwargs[key], init_kwargs[key])
2255 init_kwargs["added_tokens_decoder"] = added_tokens_decoder
2256 # convert {'__type': 'AddedToken', 'content': '<ent>', 'lstrip': False, 'normalized': True, ...} to AddedTokens
TypeError: unhashable type: 'dict'
```
This is because the tokenizer had special tokens saved as dicts, and the call to `convert_added_tokens` is made after this.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27099/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27099/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27099",
"html_url": "https://github.com/huggingface/transformers/pull/27099",
"diff_url": "https://github.com/huggingface/transformers/pull/27099.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27099.patch",
"merged_at": 1702455095000
}
|
https://api.github.com/repos/huggingface/transformers/issues/27098
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27098/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27098/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27098/events
|
https://github.com/huggingface/transformers/pull/27098
| 1,964,933,234 |
PR_kwDOCUB6oc5d8Qnz
| 27,098 |
Translate `en/tasks` folder docs to Japanese 🇯🇵
|
{
"login": "rajveer43",
"id": 64583161,
"node_id": "MDQ6VXNlcjY0NTgzMTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/64583161?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rajveer43",
"html_url": "https://github.com/rajveer43",
"followers_url": "https://api.github.com/users/rajveer43/followers",
"following_url": "https://api.github.com/users/rajveer43/following{/other_user}",
"gists_url": "https://api.github.com/users/rajveer43/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rajveer43/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rajveer43/subscriptions",
"organizations_url": "https://api.github.com/users/rajveer43/orgs",
"repos_url": "https://api.github.com/users/rajveer43/repos",
"events_url": "https://api.github.com/users/rajveer43/events{/privacy}",
"received_events_url": "https://api.github.com/users/rajveer43/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"> I'll wrap up the rest of the reviews tomorrow!\r\n\r\ngot it! take your time till then I will also review the documents once more.",
"@stevhliu the error does not make sense to me\r\n\r\n```\r\npostcss-empty: postcss.plugin was deprecated. Migration guide:\r\nhttps://evilmartians.com/chronicles/postcss-8-plugin-migration\r\n[vite-plugin-svelte] /tmp/tmpqv6gsryu/kit/src/routes/tasks/asr/+page.svelte:308:0 </Tip> attempted to close an element that was not open\r\nfile: /tmp/tmpqv6gsryu/kit/src/routes/tasks/asr/+page.svelte:308:0\r\n 306 | href=\"/docs/transformers/pr_27098/ja/main_classes/trainer#transformers.Trainer\"\r\n 307 | >Trainer</a> を使用したモデルの微調整に慣れていない場合は、<a href=\"../training#train-with-pytorch-trainer\">ここ</a> の基本的なチュートリアルをご覧ください。</p>\r\n 308 | </Tip>\r\n ^\r\n 309 | <p>これでモデルのトレーニングを開始する準備が整いました。 <code>AutoModelForCTC</code> で Wav2Vec2 をロードします。 <code>ctc_loss_reduction</code> パラメータで適用する削減を指定します。多くの場合、デフォルトの合計ではなく平均を使用する方が適切です。</p>\r\n 310 | \r\nerror during build:\r\nParseError: </Tip> attempted to close an element that was not open\r\n at error (file:///tmp/tmpqv6gsryu/kit/node_modules/svelte/src/compiler/utils/error.js:56:16)\r\n at Parser.error (file:///tmp/tmpqv6gsryu/kit/node_modules/svelte/src/compiler/parse/index.js:143:3)\r\n at tag (file:///tmp/tmpqv6gsryu/kit/node_modules/svelte/src/compiler/parse/state/tag.js:138:12)\r\n at new Parser (file:///tmp/tmpqv6gsryu/kit/node_modules/svelte/src/compiler/parse/index.js:91:12)\r\n at parse (file:///tmp/tmpqv6gsryu/kit/node_modules/svelte/src/compiler/parse/index.js:268:17)\r\n at compile (file:///tmp/tmpqv6gsryu/kit/node_modules/svelte/src/compiler/compile/index.js:136:14)\r\n at compileSvelte (file:///tmp/tmpqv6gsryu/kit/node_modules/@sveltejs/vite-plugin-svelte/src/utils/compile.js:126:20)\r\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\r\n at async Object.transform (file:///tmp/tmpqv6gsryu/kit/node_modules/@sveltejs/vite-plugin-svelte/src/index.js:220:20)\r\n at async transform (file:///tmp/tmpqv6gsryu/kit/node_modules/rollup/dist/es/shared/node-entry.js:24367:16)\r\n\r\nInstalling node dependencies\r\nBuilding HTML files. This will take a while :-)\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/doc-builder\", line 8, in <module>\r\n sys.exit(main())\r\n File \"/usr/local/lib/python3.8/site-packages/doc_builder/commands/doc_builder_cli.py\", line 47, in main\r\n args.func(args)\r\n File \"/usr/local/lib/python3.8/site-packages/doc_builder/commands/build.py\", line 170, in build_command\r\n subprocess.run(\r\n File \"/usr/local/lib/python3.8/subprocess.py\", line 516, in run\r\n raise CalledProcessError(retcode, process.args,\r\nsubprocess.CalledProcessError: Command '['npm', 'run', 'build']' returned non-zero exit status 1.\r\nError: Process completed with exit code 1.\r\n```\r\n\r\nlocally the doc bulider commands run without any error. I did not get any error. but this PR_workflow has this issue. ",
"> Cool, looks like all the tests are passing now! If you can just resolve the `toctree` error, then we can merge :)\r\n\r\nsure, its done!"
] | 1,698 | 1,701 | 1,701 |
CONTRIBUTOR
| null |
# What does this PR do?
translating `en` docs to `JP`.
Fixes #27097
Documentation: @stevhliu and @MKhalusova
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27098/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27098/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27098",
"html_url": "https://github.com/huggingface/transformers/pull/27098",
"diff_url": "https://github.com/huggingface/transformers/pull/27098.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27098.patch",
"merged_at": 1701727855000
}
|
https://api.github.com/repos/huggingface/transformers/issues/27097
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27097/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27097/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27097/events
|
https://github.com/huggingface/transformers/issues/27097
| 1,964,927,501 |
I_kwDOCUB6oc51HmoN
| 27,097 |
[i18n-jp] Translating `en/tasks` folder docs to Japanese 🇯🇵
|
{
"login": "rajveer43",
"id": 64583161,
"node_id": "MDQ6VXNlcjY0NTgzMTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/64583161?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rajveer43",
"html_url": "https://github.com/rajveer43",
"followers_url": "https://api.github.com/users/rajveer43/followers",
"following_url": "https://api.github.com/users/rajveer43/following{/other_user}",
"gists_url": "https://api.github.com/users/rajveer43/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rajveer43/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rajveer43/subscriptions",
"organizations_url": "https://api.github.com/users/rajveer43/orgs",
"repos_url": "https://api.github.com/users/rajveer43/repos",
"events_url": "https://api.github.com/users/rajveer43/events{/privacy}",
"received_events_url": "https://api.github.com/users/rajveer43/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] |
closed
| false | null |
[] |
[
"Is anyone working on this? I'm interested in contributing!\r\nIf no one is working on it, I would like to go from A (asr.md) to Z (zero_shot_object_detection.md) :)",
"@Yuki-Imajuku , the PR is already made for this issue, see #27098, \r\n\r\nyou can work on #27392"
] | 1,698 | 1,701 | 1,701 |
CONTRIBUTOR
| null | null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27097/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27097/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/27096
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27096/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27096/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27096/events
|
https://github.com/huggingface/transformers/issues/27096
| 1,964,919,742 |
I_kwDOCUB6oc51Hku-
| 27,096 |
Issue with Installing black in Python - needs to be installed separately
|
{
"login": "rajveer43",
"id": 64583161,
"node_id": "MDQ6VXNlcjY0NTgzMTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/64583161?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rajveer43",
"html_url": "https://github.com/rajveer43",
"followers_url": "https://api.github.com/users/rajveer43/followers",
"following_url": "https://api.github.com/users/rajveer43/following{/other_user}",
"gists_url": "https://api.github.com/users/rajveer43/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rajveer43/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rajveer43/subscriptions",
"organizations_url": "https://api.github.com/users/rajveer43/orgs",
"repos_url": "https://api.github.com/users/rajveer43/repos",
"events_url": "https://api.github.com/users/rajveer43/events{/privacy}",
"received_events_url": "https://api.github.com/users/rajveer43/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Would run `pip install -e \".[quality]\"`, I think you need the styling ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,698 | 1,700 | 1,700 |
CONTRIBUTOR
| null |
### System Info
I am encountering an issue when trying to install "black" after running the following commands:
```bash
pip install -e ".[docs]"
pip install git+https://github.com/huggingface/doc-builder
```
`black` should be installed by the above commands alone, but every time I need to install it separately.
### Who can help?

### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Run
```bash
pip install -e ".[docs]"
```
2. Run
```bash
install git+https://github.com/huggingface/doc-builder
```
3. Attempt to install "black" by running
```bash
pip install black
```
### Expected behavior
I expect `black` to be installed without any issues when the above commands are executed.
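As the first comment above suggests, the styling tools come with the `quality` extra rather than the `docs` extra; a minimal sketch of that workaround (an assumption based on the comment, not part of the original report):
```bash
pip install -e ".[quality]"  # pulls in black and the other repo styling dependencies
```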
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27096/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27096/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/27095
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27095/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27095/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27095/events
|
https://github.com/huggingface/transformers/issues/27095
| 1,964,905,946 |
I_kwDOCUB6oc51HhXa
| 27,095 |
Packages auto-gptq or optimum mistakenly reported as not available
|
{
"login": "stwykd",
"id": 5012027,
"node_id": "MDQ6VXNlcjUwMTIwMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5012027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stwykd",
"html_url": "https://github.com/stwykd",
"followers_url": "https://api.github.com/users/stwykd/followers",
"following_url": "https://api.github.com/users/stwykd/following{/other_user}",
"gists_url": "https://api.github.com/users/stwykd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stwykd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stwykd/subscriptions",
"organizations_url": "https://api.github.com/users/stwykd/orgs",
"repos_url": "https://api.github.com/users/stwykd/repos",
"events_url": "https://api.github.com/users/stwykd/events{/privacy}",
"received_events_url": "https://api.github.com/users/stwykd/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
] |
[
"hi @stwykd \r\nHmmm with these notebooks I used to face similar issues with `bitsandbytes` and I resolved them by simply deleting the kernel and restarting it. Make sure to put all packages at the first cell. Let me know how that goes",
"Hey. I did already tried restarting the kernel, but the error persisted.\r\nWhat also led me to think it might be `transformers`, is that I can write a cell to import those libraries and it works\r\n\r\nI've tried on a colab as well (though I understand it's a similar runtime) and I get the error",
"OK thanks, will have a look",
"seeing the same error myself, notebook was running a few weeks ago",
"Hi @ainatersol, that would be very helpful if you could share a snippet or a colab that reproduce the error.",
"I had the same error using colab and followed younesbelkada advice and it resolved. \r\n> I resolved them by simply deleting the kernel and restarting it. Make sure to put all packages at the first cell. Let me know how that goes",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I'm running the notebook again. This time on WSL rather than Kaggle from a fresh conda environment (therefore with new installation of these packages) using Python 3.10.\r\nI get the same exact error:\r\n```\r\nImportError: Loading a GPTQ quantized model requires optimum (`pip install optimum`) and auto-gptq library (`pip install auto-gptq`)\r\n```\r\n\r\nCould it be the way the presence of packages is checked: https://github.com/huggingface/transformers/blob/main/src/transformers/utils/import_utils.py#L41 ?",
"I resolved by commenting out this `elif` scope throwing the error https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_utils.py#L3017\r\nThe notebook is ran fine afterwards. ",
"Hi @stwykd, thanks for investigating. Can you tell me what the following snippet returns: \r\n\r\n```python\r\nfrom transformers.utils import is_auto_gptq_available, is_optimum_available\r\nprint(is_auto_gptq_available())\r\nprint(is_optimum_available())\r\n```\r\n\r\nThis is strange that this is working when one of the following libraries are not available. Maybe there is an issue on how we check them. ",
"They both return True now.\r\nI've uncommented the lines I had removed. I also restarted the kernel and cleared any cache. The notebook seems good now",
"Thanks @SunMarc @stwykd ! Issue seems to be resolved, is it ok if we close it?",
"Yes, it can be closed now",
"In google colab, i reversed these lines and it started to work:\r\n\r\nfrom:\r\n```\r\n!pip install -U accelerate bitsandbytes datasets peft transformers\r\n!pip install auto_gptq\r\n!pip install optimum\r\n```\r\n\r\nto: \r\n```\r\n!pip install auto_gptq\r\n!pip install optimum\r\n!pip install -U accelerate bitsandbytes datasets peft transformers\r\n```"
] | 1,698 | 1,704 | 1,703 |
NONE
| null |
### System Info
I'm running on a Kaggle notebook using GPU T4 x2
### Who can help?
@younesbelkada @SunMarc
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
https://github.com/huggingface/transformers/blob/main/src/transformers/utils/import_utils.py#L41 this function _is_package_available() might be misreporting optimum or auto-gptq as not available:
```
model = AutoModelForCausalLM.from_pretrained(mn, device_map=0, torch_dtype=torch.float16)
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
Cell In[14], line 1
----> 1 model = AutoModelForCausalLM.from_pretrained(mn, device_map=0, torch_dtype=torch.float16)
File /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:563, in _BaseAutoModelClass.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
561 elif type(config) in cls._model_mapping.keys():
562 model_class = _get_model_class(config, cls._model_mapping)
--> 563 return model_class.from_pretrained(
564 pretrained_model_name_or_path, *model_args, config=config, **hub_kwargs, **kwargs
565 )
566 raise ValueError(
567 f"Unrecognized configuration class {config.__class__} for this kind of AutoModel: {cls.__name__}.\n"
568 f"Model type should be one of {', '.join(c.__name__ for c in cls._model_mapping.keys())}."
569 )
File /opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py:2572, in PreTrainedModel.from_pretrained(cls, pretrained_model_name_or_path, config, cache_dir, ignore_mismatched_sizes, force_download, local_files_only, token, revision, use_safetensors, *model_args, **kwargs)
2570 raise RuntimeError("GPU is required to quantize or run quantize model.")
2571 elif not (is_optimum_available() and is_auto_gptq_available()):
-> 2572 raise ImportError(
2573 "Loading GPTQ quantized model requires optimum library : `pip install optimum` and auto-gptq library 'pip install auto-gptq'"
2574 )
2575 else:
2576 # Need to protect the import
2577 from optimum.gptq import GPTQQuantizer
ImportError: Loading GPTQ quantized model requires optimum library : `pip install optimum` and auto-gptq library 'pip install auto-gptq'`
```
Here's a filtered list of the packages installed:
```
!pip list
Package Version Editable project location
---------------------------------------- --------------------- -------------------------
accelerate 0.22.0
auto-gptq 0.4.2
optimum 1.13.2
scipy 1.11.3
tensorflow 2.12.0
tokenizers 0.13.3
torch 2.0.0
```
### Expected behavior
I'd expect to be able to load the model
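For reference, a simplified sketch of the kind of availability check `_is_package_available()` performs (an approximation based on `importlib`, not the exact transformers implementation); the module-name vs. distribution-name mismatch (`auto_gptq` vs. `auto-gptq`) is exactly the kind of detail such a check can trip over:
```python
import importlib.metadata
import importlib.util


def package_available(name: str) -> bool:
    # Rough check: the module must be importable and its distribution metadata findable
    if importlib.util.find_spec(name) is None:
        return False
    for dist_name in (name, name.replace("_", "-")):
        try:
            importlib.metadata.version(dist_name)
            return True
        except importlib.metadata.PackageNotFoundError:
            continue
    return False


print(package_available("optimum"), package_available("auto_gptq"))
```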
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27095/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27095/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/27094
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27094/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27094/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27094/events
|
https://github.com/huggingface/transformers/pull/27094
| 1,964,819,382 |
PR_kwDOCUB6oc5d74GF
| 27,094 |
Create example code problem fixed
|
{
"login": "Diksha-Binary-Ninja",
"id": 139619268,
"node_id": "U_kgDOCFJrxA",
"avatar_url": "https://avatars.githubusercontent.com/u/139619268?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Diksha-Binary-Ninja",
"html_url": "https://github.com/Diksha-Binary-Ninja",
"followers_url": "https://api.github.com/users/Diksha-Binary-Ninja/followers",
"following_url": "https://api.github.com/users/Diksha-Binary-Ninja/following{/other_user}",
"gists_url": "https://api.github.com/users/Diksha-Binary-Ninja/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Diksha-Binary-Ninja/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Diksha-Binary-Ninja/subscriptions",
"organizations_url": "https://api.github.com/users/Diksha-Binary-Ninja/orgs",
"repos_url": "https://api.github.com/users/Diksha-Binary-Ninja/repos",
"events_url": "https://api.github.com/users/Diksha-Binary-Ninja/events{/privacy}",
"received_events_url": "https://api.github.com/users/Diksha-Binary-Ninja/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,698 | 1,701 | 1,701 |
NONE
| null |
The error you're encountering is due to the fact that the model you're using, "amazon/MistralLite," doesn't support Flash Attention 2.0. Flash Attention 2.0 is a specific type of attention mechanism, and not all models in the Transformers library are compatible with it.
To resolve this issue, you have a few options:

1. **Use a Different Model:** If you specifically need to use Flash Attention 2.0, you'll need to choose a different model that supports it. You can search for models in the Transformers library that are explicitly designed with Flash Attention 2.0, or check for updates to the "amazon/MistralLite" model that may include support for it in the future.
2. **Disable Flash Attention 2.0:** If Flash Attention 2.0 is not a strict requirement for your task, you can disable it by removing the `use_flash_attention_2=True` parameter when loading the model. You can simply use the model like this: `model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")` (see the sketch after this list). This way, you'll use the model with its default attention mechanism.
3. **Request Support:** As the error message suggests, you can consider opening an issue on GitHub to request support for Flash Attention 2.0 in the "amazon/MistralLite" model. This may not provide an immediate solution but could help in the long run.

Choose the option that best fits your needs and requirements for your project. If Flash Attention 2.0 is not crucial for your task, option 2 (disabling it) is the quickest way to proceed without modifying the model itself.
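A minimal sketch of option 2 (loading without Flash Attention 2.0), assuming the same `model_id = "amazon/MistralLite"` as in the original report:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "amazon/MistralLite"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# No use_flash_attention_2=True here, so the default attention implementation is used
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
```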
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27094/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27094/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27094",
"html_url": "https://github.com/huggingface/transformers/pull/27094",
"diff_url": "https://github.com/huggingface/transformers/pull/27094.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27094.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/27093
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27093/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27093/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27093/events
|
https://github.com/huggingface/transformers/pull/27093
| 1,964,769,076 |
PR_kwDOCUB6oc5d7tXP
| 27,093 |
Translate index.md to Turkish
|
{
"login": "mertyyanik",
"id": 32648818,
"node_id": "MDQ6VXNlcjMyNjQ4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/32648818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mertyyanik",
"html_url": "https://github.com/mertyyanik",
"followers_url": "https://api.github.com/users/mertyyanik/followers",
"following_url": "https://api.github.com/users/mertyyanik/following{/other_user}",
"gists_url": "https://api.github.com/users/mertyyanik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mertyyanik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mertyyanik/subscriptions",
"organizations_url": "https://api.github.com/users/mertyyanik/orgs",
"repos_url": "https://api.github.com/users/mertyyanik/repos",
"events_url": "https://api.github.com/users/mertyyanik/events{/privacy}",
"received_events_url": "https://api.github.com/users/mertyyanik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @mertyyanik ! Thank you for working on translating the doc page into Turkish.\r\nFrom what I see, this is the first time a doc page has been translated into Turkish. As such, you'll need to modify two more files in order to build the docs in a new language - `.github/workflows/build_documentation.yml` and `.github/workflows/build_pr_documentation.yml`.
\r\n\r\nAdd the two-letter code for your language to the list of languages. You can look up the code [here](https://www.loc.gov/standards/iso639-2/php/code_list.php).\r\nAlso pinging @merveenoyan for a review of the Turkish translation :) ",
"I made the changes. Thanks @MKhalusova and @merveenoyan ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27093). All of your documentation changes will be reflected on that endpoint.",
"Hello, @merveenoyan. I need a review. When it's done, I will proceed to the other sections.",
"Thank you for your work, we can merge it now! "
] | 1,698 | 1,699 | 1,699 |
CONTRIBUTOR
| null |
# What does this PR do?
Translated `index.md` to Turkish language.
Part of #27088
## Who can review?
@stevhliu, @MKhalusova, @merveenoyan
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27093/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27093/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27093",
"html_url": "https://github.com/huggingface/transformers/pull/27093",
"diff_url": "https://github.com/huggingface/transformers/pull/27093.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27093.patch",
"merged_at": 1699450520000
}
|
https://api.github.com/repos/huggingface/transformers/issues/27092
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27092/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27092/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27092/events
|
https://github.com/huggingface/transformers/issues/27092
| 1,964,346,542 |
I_kwDOCUB6oc51FYyu
| 27,092 |
example code problem
|
{
"login": "iplayfast",
"id": 751306,
"node_id": "MDQ6VXNlcjc1MTMwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/751306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iplayfast",
"html_url": "https://github.com/iplayfast",
"followers_url": "https://api.github.com/users/iplayfast/followers",
"following_url": "https://api.github.com/users/iplayfast/following{/other_user}",
"gists_url": "https://api.github.com/users/iplayfast/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iplayfast/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iplayfast/subscriptions",
"organizations_url": "https://api.github.com/users/iplayfast/orgs",
"repos_url": "https://api.github.com/users/iplayfast/repos",
"events_url": "https://api.github.com/users/iplayfast/events{/privacy}",
"received_events_url": "https://api.github.com/users/iplayfast/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] |
open
| false | null |
[] |
[
"This is fixed by #26464 and will be in the next release"
] | 1,698 | 1,698 | null |
NONE
| null |
While trying the example from https://huggingface.co/amazon/MistralLite, this is the result:
```
python examplecode.py
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Traceback (most recent call last):
  File "/home/chris/ai/text-generation-webui/amazonmistral/examplecode.py", line 8, in <module>
    model = AutoModelForCausalLM.from_pretrained(model_id,
  File "/home/chris/anaconda3/envs/textgen/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 571, in from_pretrained
    return model_class.from_pretrained(
  File "/home/chris/anaconda3/envs/textgen/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3076, in from_pretrained
    config = cls._check_and_enable_flash_attn_2(config, torch_dtype=torch_dtype, device_map=device_map)
  File "/home/chris/anaconda3/envs/textgen/lib/python3.10/site-packages/transformers/modeling_utils.py", line 1265, in _check_and_enable_flash_attn_2
    raise ValueError(
ValueError: The current architecture does not support Flash Attention 2.0. Please open an issue on GitHub to request support for this architecture: https://github.com/huggingface/transformers/issues/new
```
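Since the comment above notes this is fixed by #26464 and will ship in the next release, one possible interim workaround (an assumption, not confirmed in the thread) is to install transformers from source:
```bash
pip install git+https://github.com/huggingface/transformers.git
```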
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27092/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27092/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27091
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27091/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27091/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27091/events
|
https://github.com/huggingface/transformers/pull/27091
| 1,964,269,938 |
PR_kwDOCUB6oc5d6CSi
| 27,091 |
Added huggingface emoji instead of the markdown format
|
{
"login": "shettyvarshaa",
"id": 112955692,
"node_id": "U_kgDOBruRLA",
"avatar_url": "https://avatars.githubusercontent.com/u/112955692?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shettyvarshaa",
"html_url": "https://github.com/shettyvarshaa",
"followers_url": "https://api.github.com/users/shettyvarshaa/followers",
"following_url": "https://api.github.com/users/shettyvarshaa/following{/other_user}",
"gists_url": "https://api.github.com/users/shettyvarshaa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shettyvarshaa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shettyvarshaa/subscriptions",
"organizations_url": "https://api.github.com/users/shettyvarshaa/orgs",
"repos_url": "https://api.github.com/users/shettyvarshaa/repos",
"events_url": "https://api.github.com/users/shettyvarshaa/events{/privacy}",
"received_events_url": "https://api.github.com/users/shettyvarshaa/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27091). All of your documentation changes will be reflected on that endpoint."
] | 1,698 | 1,698 | 1,698 |
CONTRIBUTOR
| null |
Added the Hugging Face emoji directly instead of the markdown shortcode, since the shortcode format was not rendering the emoji.
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #27067
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27091/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27091/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27091",
"html_url": "https://github.com/huggingface/transformers/pull/27091",
"diff_url": "https://github.com/huggingface/transformers/pull/27091.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27091.patch",
"merged_at": 1698354616000
}
|
https://api.github.com/repos/huggingface/transformers/issues/27090
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27090/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27090/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27090/events
|
https://github.com/huggingface/transformers/pull/27090
| 1,964,199,494 |
PR_kwDOCUB6oc5d5yv7
| 27,090 |
Fix no split modules underlying modules
|
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,698 | 1,698 | 1,698 |
MEMBER
| null |
# What does this PR do?
This PR fixes `_get_no_split_modules` introduced in #26949. Basically, once a module appears in `_no_split_modules`, we don't check its children. By doing that, we can decide not to split a child module (which can itself be a `PreTrainedModel`) without having to check whether its `_no_split_modules` is set. This is particularly useful since we don't necessarily want to set it (e.g. for a small model that can't be split).
This PR, along with this [one](https://github.com/huggingface/transformers/pull/27089), should also fix the issues in the CI.
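For illustration, here is a simplified, hypothetical sketch of the traversal (not the actual `_get_no_split_modules` implementation):
```python
import torch.nn as nn

def collect_no_split_modules(module: nn.Module, no_split, found=None):
    """Hypothetical simplification: stop descending once a module is declared un-splittable."""
    found = set() if found is None else found
    if module.__class__.__name__ in no_split:
        found.add(module.__class__.__name__)
        # the children of this block are intentionally not inspected, so a child
        # PreTrainedModel does not need its own `_no_split_modules` to be set
        return found
    for child in module.children():
        collect_no_split_modules(child, no_split, found)
    return found
```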
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27090/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27090/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27090",
"html_url": "https://github.com/huggingface/transformers/pull/27090",
"diff_url": "https://github.com/huggingface/transformers/pull/27090.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27090.patch",
"merged_at": 1698414560000
}
|
https://api.github.com/repos/huggingface/transformers/issues/27089
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27089/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27089/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27089/events
|
https://github.com/huggingface/transformers/pull/27089
| 1,964,168,734 |
PR_kwDOCUB6oc5d5rwe
| 27,089 |
fix detr device map
|
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Yes, not every `ForObjectDetection` models needs this. For example, the `TableTransformerForObjectDetection` don't modify the weights at initialization. I will add a comment. "
] | 1,698 | 1,698 | 1,698 |
MEMBER
| null |
# What does this PR do?
This PR solves an issue that #26849 introduced. For `DeformableDetrForObjectDetection` and `DetaForObjectDetection`, we can't initialize the model on the `meta` device because the weights are modified during initialization.
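A small, generic PyTorch illustration of why `meta`-device initialization breaks such models (not the actual fix):
```python
import torch.nn as nn

# Parameters created on the "meta" device carry only shape/dtype metadata and no storage,
# so weight values computed at construction time are lost.
head = nn.Linear(256, 91, device="meta")  # stand-in for a detection head tweaked in __init__

print(head.weight.is_meta)            # True -> there is no data behind the parameter
head = head.to_empty(device="cpu")    # materializes *uninitialized* memory
# any init-time modification of the weights would have to be re-applied at this point
```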
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27089/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27089/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27089",
"html_url": "https://github.com/huggingface/transformers/pull/27089",
"diff_url": "https://github.com/huggingface/transformers/pull/27089.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27089.patch",
"merged_at": 1698416892000
}
|
https://api.github.com/repos/huggingface/transformers/issues/27088
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27088/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27088/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27088/events
|
https://github.com/huggingface/transformers/issues/27088
| 1,964,081,611 |
I_kwDOCUB6oc51EYHL
| 27,088 |
[i18n-TR] Translating docs to Turkish
|
{
"login": "mertyyanik",
"id": 32648818,
"node_id": "MDQ6VXNlcjMyNjQ4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/32648818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mertyyanik",
"html_url": "https://github.com/mertyyanik",
"followers_url": "https://api.github.com/users/mertyyanik/followers",
"following_url": "https://api.github.com/users/mertyyanik/following{/other_user}",
"gists_url": "https://api.github.com/users/mertyyanik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mertyyanik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mertyyanik/subscriptions",
"organizations_url": "https://api.github.com/users/mertyyanik/orgs",
"repos_url": "https://api.github.com/users/mertyyanik/repos",
"events_url": "https://api.github.com/users/mertyyanik/events{/privacy}",
"received_events_url": "https://api.github.com/users/mertyyanik/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] |
open
| false | null |
[] |
[
"I will work index.md first."
] | 1,698 | 1,701 | null |
CONTRIBUTOR
| null |
Hi!
Let's bring the documentation to all the Turkish-speaking community 🌐
Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list.
Some notes:
* Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗).
* Please translate in a gender-neutral way.
* Add your translations to the folder called `tr` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source).
* Register your translation in `tr/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml).
* Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @stevhliu and @MKhalusova for review.
* 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/).
## Get Started section
- [x] [index.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.md) https://github.com/huggingface/transformers/pull/20180
- [ ] [quicktour.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.md) (waiting for initial PR to go through)
- [ ] [installation.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.md). (In progress by @mertyyanik )
## Tutorial section
- [ ] [pipeline_tutorial.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.md)
- [ ] [autoclass_tutorial.md](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.md)
- [ ] [preprocessing.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.md)
- [ ] [training.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.md)
- [ ] [accelerate.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.md)
- [ ] [model_sharing.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.md)
- [ ] [multilingual.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.md)
<!--
Keep on adding more as you go 🔥
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27088/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27088/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27087
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27087/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27087/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27087/events
|
https://github.com/huggingface/transformers/issues/27087
| 1,963,986,267 |
I_kwDOCUB6oc51EA1b
| 27,087 |
Make it possible to pass `torch_dtype` argument as a string so it's easier to serialize
|
{
"login": "tokestermw",
"id": 4722119,
"node_id": "MDQ6VXNlcjQ3MjIxMTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4722119?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tokestermw",
"html_url": "https://github.com/tokestermw",
"followers_url": "https://api.github.com/users/tokestermw/followers",
"following_url": "https://api.github.com/users/tokestermw/following{/other_user}",
"gists_url": "https://api.github.com/users/tokestermw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tokestermw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tokestermw/subscriptions",
"organizations_url": "https://api.github.com/users/tokestermw/orgs",
"repos_url": "https://api.github.com/users/tokestermw/repos",
"events_url": "https://api.github.com/users/tokestermw/events{/privacy}",
"received_events_url": "https://api.github.com/users/tokestermw/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] |
closed
| false | null |
[] |
[
"I am happy to take a look at this! "
] | 1,698 | 1,707 | 1,707 |
CONTRIBUTOR
| null |
### Feature request
Hi, it would be nice if we could pass `torch_dtype` as a string.
```python
from transformers import pipeline
# this
m = pipeline('text-generation', 'Mistral-7B-Instruct-v0.1', device=0, low_cpu_mem_usage=True, torch_dtype=torch.float16)
# or that
m = pipeline('text-generation', 'Mistral-7B-Instruct-v0.1', device=0, low_cpu_mem_usage=True, torch_dtype='float16')
```
### Motivation
* It's common to use a config file (e.g. yaml) to pass in arguments to `from_pretrained`, but `torch_dtype` requires `torch`. e.g. we can't use the `assemble` method in [spacy-llm](https://github.com/explosion/spacy-llm/blob/7a0460ce112ae1fe783dd100b4d32e56c282919f/spacy_llm/util.py#L37).
* For small GPUs (e.g. T4), we can't load models of 7B parameters or larger (CUDA out-of-memory error) unless we specify `torch_dtype=torch.float16` (`torch_dtype='auto'` doesn't work either), assuming we're not using 8-bit or lower quantization.
### Your contribution
Yes, I can make a PR so that if `torch_dtype` is a string, we convert it to the corresponding `torch` dtype.
e.g. `'float16'` to `torch.float16`
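A rough sketch of the conversion being proposed (`resolve_torch_dtype` is a hypothetical helper name, not an existing transformers function):
```python
import torch

def resolve_torch_dtype(torch_dtype):
    """Accept either a torch.dtype or a string such as 'float16' / 'bfloat16'."""
    if isinstance(torch_dtype, str) and torch_dtype != "auto":
        resolved = getattr(torch, torch_dtype, None)
        if not isinstance(resolved, torch.dtype):
            raise ValueError(f"`torch_dtype` string {torch_dtype!r} is not a valid torch dtype")
        return resolved
    return torch_dtype

print(resolve_torch_dtype("float16"))  # torch.float16
```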
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27087/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27087/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/27086
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27086/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27086/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27086/events
|
https://github.com/huggingface/transformers/pull/27086
| 1,963,808,906 |
PR_kwDOCUB6oc5d4cd3
| 27,086 |
[Attention Mask] Refactor all encoder-decoder attention mask
|
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for the fast reviews everyone! Ran slow tests\r\n```\r\nCUDA_VISIBLE_DEVICES=\"0\" RUN_SLOW=1 pytest ...\r\n```\r\n\r\nfor:\r\n- Mistral\r\n- Falcon\r\n- LLama\r\n- Whisper\r\n- Bart\r\n- LED (exceptional model)\r\n- Bloom (uses boolean mask)\r\n\r\nThink this should be good enough! ",
"@patrickvonplaten I know this PR has been merged, but I have a question regarding the Pytorch version of BART. The implementation assumes the decoder is always used in an autoregressive manner [Pytorch version](https://github.com/huggingface/transformers/blob/8211c59b9a8fe84d2861446b26542f89a0260e64/src/transformers/models/bart/modeling_bart.py#L998) unlike the flax [version](https://github.com/huggingface/transformers/blob/8211c59b9a8fe84d2861446b26542f89a0260e64/src/transformers/models/bart/modeling_flax_bart.py#L262) . There could be cases of the decoder being used as an \"encoder\" and a \"cross attention\". In this case, the autoregressive nature is not required. While I think the default should be the autoregressive manner, but if `is_decoder` is set to false, the non-causal masking operation should be performed instead.",
"> non-causal masking operation should be performed instead.\r\n\r\n@DavidAkinpelu I think you linked the FlaxAttention class, not the FlaxDecoder class above. In PT the Attention class can also be used in non-causal model, just like in Flax. If you want to use Bart in non-auto-regressive mode why don't you use BartEncoder? ",
"@patrickvonplaten This paper got me thinking in that direction [Mores+](https://arxiv.org/pdf/2205.04275.pdf)."
] | 1,698 | 1,699 | 1,698 |
MEMBER
| null |
# What does this PR do?
This PR refactors the attention mask of all PT Seq2Seq models. While this is a nice quality-of-life improvement, it is also necessary to effectively add FA2 and SDPA to PT Seq2Seq models (without having to change 54+ files).
In a follow-up PR it'll be much easier to add FA2 to just Bart and the most important Bart-like models.
The PR slightly goes against the single-file policy, but attention masks are really the same across models and there is also only so much they can be (causal, non-causal, windowed). I think it doesn't really hurt readability as the functions are very clearly defined (create a 4d attention mask from a 2d one).
For some very big exceptions (I found only one, which is LED, see comment [here](https://github.com/huggingface/transformers/pull/27086#discussion_r1374403602)), we could just write part of the mask creation separately, as is done.
I could also give the mask creation functions a `_` prefix to make it clearer that they are private methods in Transformers. Either keeping it as is or changing it works for me.
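For intuition, a rough, self-contained sketch of the 2d -> 4d expansion this PR centralizes (helper name and signature here are illustrative, not the ones used in the PR):
```python
import torch

def to_4d_causal_mask(attention_mask_2d, dtype=torch.float32):
    # Combine a lower-triangular causal mask with the padding mask and express
    # "masked out" as a large negative additive bias on the attention scores.
    bsz, seq_len = attention_mask_2d.shape
    causal = torch.tril(torch.ones(seq_len, seq_len)).bool()
    padding = attention_mask_2d[:, None, None, :].bool()   # [bsz, 1, 1, seq]
    allowed = causal[None, None, :, :] & padding           # [bsz, 1, seq, seq]
    bias = torch.full((), torch.finfo(dtype).min, dtype=dtype)
    return torch.where(allowed, torch.zeros((), dtype=dtype), bias)

mask = to_4d_causal_mask(torch.tensor([[1, 1, 1, 0]]))
print(mask.shape)  # torch.Size([1, 1, 4, 4])
```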
@amyeroberts @LysandreJik @ArthurZucker this is ready for a review!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27086/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 3,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27086/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27086",
"html_url": "https://github.com/huggingface/transformers/pull/27086",
"diff_url": "https://github.com/huggingface/transformers/pull/27086.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27086.patch",
"merged_at": 1698417721000
}
|
https://api.github.com/repos/huggingface/transformers/issues/27085
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27085/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27085/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27085/events
|
https://github.com/huggingface/transformers/pull/27085
| 1,963,774,144 |
PR_kwDOCUB6oc5d4U2R
| 27,085 |
[`T5Tokenizer`] Fix fast and extra tokens
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,698 | 1,698 | 1,698 |
COLLABORATOR
| null |
# What does this PR do?
fixes #26951, where the extra ids were checked differently for fast and slow
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27085/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27085/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27085",
"html_url": "https://github.com/huggingface/transformers/pull/27085",
"diff_url": "https://github.com/huggingface/transformers/pull/27085.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27085.patch",
"merged_at": 1698387504000
}
|
https://api.github.com/repos/huggingface/transformers/issues/27084
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27084/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27084/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27084/events
|
https://github.com/huggingface/transformers/pull/27084
| 1,963,679,219 |
PR_kwDOCUB6oc5d4AFx
| 27,084 |
MusicGen: Add Stereo Model
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Is the format we gave (interleaving) forced by BC? Otherwise storing in two list or tuples would be better IMO (kind of like how audio are stored no?) \r\nmy only \"complain\" here",
"The original model is designed to predict them in an interleaved way:\r\n```\r\n[left_codebook_1, right_codebook_1, left_codebook_2, right_codebook_2, ..., left_codebook_4, right_codebook_4]\r\n```\r\nWe could change this to predict left first, then right:\r\n```\r\n[left_codebook_1, left_codebook_2, ..., left_codebook_4, right_codebook_1, right_codebook_2, ..., right_codebook_4]\r\n```\r\nWhich would require re-shaping the LM head weights, and duplicating the pattern mask along the row dimension. Overall I think the complexity would be similar to the interleaved way we have now.\r\n\r\nBut predicting two sets of codebooks as two separate tuples would break compatibility with the existing mono musicgen, or otherwise complicate the code since we'll have different inputs / sampling logic depending on whether we're mono / stereo.",
"Awesome thanks for explaining"
] | 1,698 | 1,699 | 1,699 |
CONTRIBUTOR
| null |
# What does this PR do?
The original MusicGen model generates mono (1-channel) outputs. It does this by predicting a set of 4 codebooks at each generation step:
```
[codebook_1, codebook_2, codebook_3, codebook_4]
```
After generating, the sequence of predicted codebooks is passed through the EnCodec model to get the final waveform.
This PR adds the MusicGen **stereo** model. It works by predicting **two** sets of codebooks at each step. One set of codebooks corresponds to the left channel, the other set corresponds to the right channel. The sets of codebooks are interleaved as follows:
```
[left_codebook_1, right_codebook_1, left_codebook_2, right_codebook_2, ..., left_codebook_4, right_codebook_4]
```
After generating, the sequence of generated codebooks is partitioned into its left/right parts, and each sequence is then passed through EnCodec to get the left/right waveform respectively.
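A toy sketch of that interleaving/partitioning (shapes and values are illustrative only, not the model's actual tensors):
```python
import torch

num_codebooks, seq_len = 4, 10
# codebook axis holds [left_1, right_1, left_2, right_2, ..., left_4, right_4]
codes = torch.arange(2 * num_codebooks * seq_len).reshape(2 * num_codebooks, seq_len)

left = codes[0::2]   # codebooks for the left channel
right = codes[1::2]  # codebooks for the right channel
# each half would then be decoded by EnCodec into the corresponding waveform channel
print(left.shape, right.shape)  # torch.Size([4, 10]) torch.Size([4, 10])
```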
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27084/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27084/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27084",
"html_url": "https://github.com/huggingface/transformers/pull/27084",
"diff_url": "https://github.com/huggingface/transformers/pull/27084.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27084.patch",
"merged_at": 1699449962000
}
|
https://api.github.com/repos/huggingface/transformers/issues/27083
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27083/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27083/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27083/events
|
https://github.com/huggingface/transformers/pull/27083
| 1,963,651,050 |
PR_kwDOCUB6oc5d35-S
| 27,083 |
Fuyu processor: box coordinates
|
{
"login": "pcuenca",
"id": 1177582,
"node_id": "MDQ6VXNlcjExNzc1ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1177582?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pcuenca",
"html_url": "https://github.com/pcuenca",
"followers_url": "https://api.github.com/users/pcuenca/followers",
"following_url": "https://api.github.com/users/pcuenca/following{/other_user}",
"gists_url": "https://api.github.com/users/pcuenca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pcuenca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pcuenca/subscriptions",
"organizations_url": "https://api.github.com/users/pcuenca/orgs",
"repos_url": "https://api.github.com/users/pcuenca/repos",
"events_url": "https://api.github.com/users/pcuenca/events{/privacy}",
"received_events_url": "https://api.github.com/users/pcuenca/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27083). All of your documentation changes will be reflected on that endpoint.",
"Can I use AutoModelForCausalLM and AutoProcessor instead of using Fuyu-specific pipelines?",
"Hi @adhikjoshi, yes, you can load both the Fuyu model and its processor using `AutoModelForCausalLM` and `AutoProcessor` respectively"
] | 1,698 | 1,698 | 1,698 |
MEMBER
| null |
# What does this PR do?
PoC to post-process box coordinates returned by the model. The following should work:
```py
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = FuyuForCausalLM.from_pretrained(model_id, device_map=device, torch_dtype=dtype)
processor = FuyuProcessor(image_processor=FuyuImageProcessor(), tokenizer=tokenizer)
# Prompt appropriate for bounding box detection
text = "statistics"
prompt = f"When presented with a box, perform OCR to extract text contained within it. If provided with text, generate the corresponding bounding box.\n{text}"
image = Image.open("screen2words_ui_example.png")
model_inputs = processor(text=prompt, images=[image]).to(device)
generation_output = model.generate(**model_inputs, max_new_tokens=40)
results = processor.post_process_box_coordinates(generation_output, target_sizes=torch.Tensor([image.size[::-1]]))
# TODO: maybe unbox the <box> here as well??
decoded = processor.decode(results[0], skip_special_tokens=True)
print(decoded)
# <box>60, 124, 100, 268</box>
```
I'd like to validate whether this approach is appropriate; what do you think @amyeroberts? If it is, then we can:
- Support `point` coordinates too.
- Perform the reverse transformations on input prompts. There's already code in the processor for that purpose, I think we could maybe simplify it a bit.
- Maybe provide an optional resizing + padding pre-processing step for images, only for the bounding box detection task. According to our conversations with the original authors (and our tests), this task only works properly when the input image size is close to `(1080, 1920)`. The correct approach is to downscale larger images, and then pad to match that size.
----
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@amyeroberts, @molbap
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27083/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27083/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27083",
"html_url": "https://github.com/huggingface/transformers/pull/27083",
"diff_url": "https://github.com/huggingface/transformers/pull/27083.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27083.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/27082
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27082/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27082/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27082/events
|
https://github.com/huggingface/transformers/issues/27082
| 1,963,631,944 |
I_kwDOCUB6oc51CqVI
| 27,082 |
accelerate training confusion
|
{
"login": "zwhus",
"id": 121282623,
"node_id": "U_kgDOBzqgPw",
"avatar_url": "https://avatars.githubusercontent.com/u/121282623?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zwhus",
"html_url": "https://github.com/zwhus",
"followers_url": "https://api.github.com/users/zwhus/followers",
"following_url": "https://api.github.com/users/zwhus/following{/other_user}",
"gists_url": "https://api.github.com/users/zwhus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zwhus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zwhus/subscriptions",
"organizations_url": "https://api.github.com/users/zwhus/orgs",
"repos_url": "https://api.github.com/users/zwhus/repos",
"events_url": "https://api.github.com/users/zwhus/events{/privacy}",
"received_events_url": "https://api.github.com/users/zwhus/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hey, pretty sure you can still launch with `accelerate launch` so what is the issue you are having? ",
"ALso would be relevant to use the latest version of transformers! ",
"I would like to know whether the model is using FSDP for training after I start accelerate launch normally and configure FSDP under transformers= 4.28.0.Dev0(https://github.com/huggingface/transformers/blob/v4.28.0/src/transformers/trainer.py#L213), because the code of this version does not initialize this part.\r\n",
"I don't quite understand whether deepspeed or FSDP can be used normally during training after deepspeed or FSDP is set using `accelerate config` in transformers= 4.28.0.Dev0.",
"Could you ask your question on the [forum](https://discuss.huggingface.co/) instead? I'm sure the community will be of help!\r\nAnd it does not seem to be a bug or an issue\r\nThanks!",
"Ok, Thanks for your reply\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,698 | 1,701 | 1,701 |
NONE
| null |
I have some questions about training with accelerate.
When I use transformers==4.28.0.dev0 and configure accelerate, the specific configuration is as follows:
```
- `Accelerate` version: 0.23.0
- Platform: Linux-3.10.0-1160.el7.x86_64-x86_64-with-glibc2.31
- Python version: 3.10.13
- Numpy version: 1.22.4
- PyTorch version (GPU?): 2.1.0+cu118 (True)
- PyTorch XPU available: False
- PyTorch NPU available: False
- System RAM: 1007.61 GB
- GPU type: NVIDIA A800-SXM4-80GB
- `Accelerate` default config:
- compute_environment: LOCAL_MACHINE
- distributed_type: FSDP
- mixed_precision: bf16
- use_cpu: False
- debug: False
- num_processes: 8
- machine_rank: 0
- num_machines: 1
- rdzv_backend: static
- same_network: True
- main_training_function: main
- fsdp_config: {'fsdp_auto_wrap_policy': 'TRANSFORMER_BASED_WRAP', 'fsdp_backward_prefetch_policy': 'BACKWARD_PRE', 'fsdp_forward_prefetch': False, 'fsdp_offload_params': False, 'fsdp_sharding_strategy': 1, 'fsdp_state_dict_type': 'SHARDED_STATE_DICT', 'fsdp_sync_module_states': True, 'fsdp_transformer_layer_cls_to_wrap': 'LlamaDecoderLayer', 'fsdp_use_orig_params': False}
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
```
And I can run the following command:
`
accelerate launch --num_processes 4 --main_process_port 23786 xx.py config/xx.py
`
But the 4.28.0.dev0 code does not initialize Accelerate (https://github.com/huggingface/transformers/blob/v4.28.0/src/transformers/trainer.py#L213), so may I ask why `accelerate launch` still works and whether FSDP is actually applied?
By contrast, I observed that in transformers==4.31.1 (https://github.com/huggingface/transformers/blob/v4.34.1/src/transformers/trainer.py#L209) Accelerate and FSDP are initialized in the code, which seems reasonable.
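One rough way I could check whether FSDP is actually applied at runtime (a hedged diagnostic; `trainer` below is a hypothetical `Trainer` instance after the model has been wrapped):
```python
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

def is_fsdp_wrapped(model) -> bool:
    """Return True if any submodule of `model` is wrapped by FSDP."""
    return any(isinstance(m, FSDP) for m in model.modules())

# e.g. after the Trainer has prepared the model: is_fsdp_wrapped(trainer.model)
```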
Looking forward to your reply.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27082/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27082/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/27081
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27081/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27081/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27081/events
|
https://github.com/huggingface/transformers/pull/27081
| 1,963,265,557 |
PR_kwDOCUB6oc5d2j14
| 27,081 |
make tests of pytorch_example device agnostic
|
{
"login": "statelesshz",
"id": 28150734,
"node_id": "MDQ6VXNlcjI4MTUwNzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/28150734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/statelesshz",
"html_url": "https://github.com/statelesshz",
"followers_url": "https://api.github.com/users/statelesshz/followers",
"following_url": "https://api.github.com/users/statelesshz/following{/other_user}",
"gists_url": "https://api.github.com/users/statelesshz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/statelesshz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/statelesshz/subscriptions",
"organizations_url": "https://api.github.com/users/statelesshz/orgs",
"repos_url": "https://api.github.com/users/statelesshz/repos",
"events_url": "https://api.github.com/users/statelesshz/events{/privacy}",
"received_events_url": "https://api.github.com/users/statelesshz/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27081). All of your documentation changes will be reflected on that endpoint.",
"@amyeroberts This is a continuation of (the merged) #25870. There might be more such PRs in the future."
] | 1,698 | 1,698 | 1,698 |
CONTRIBUTOR
| null |
# What does this PR do?
Part of https://github.com/huggingface/transformers/issues/25654
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
cc @ydshieh @fxmarty @zhangsibo1129 @arsalanu
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27081/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27081/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27081",
"html_url": "https://github.com/huggingface/transformers/pull/27081",
"diff_url": "https://github.com/huggingface/transformers/pull/27081.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27081.patch",
"merged_at": 1698677801000
}
|
https://api.github.com/repos/huggingface/transformers/issues/27080
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27080/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27080/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27080/events
|
https://github.com/huggingface/transformers/pull/27080
| 1,963,113,970 |
PR_kwDOCUB6oc5d2CjF
| 27,080 |
Remove unneeded prints in modeling_gpt_neox.py
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27080). All of your documentation changes will be reflected on that endpoint."
] | 1,698 | 1,698 | 1,698 |
CONTRIBUTOR
| null |
# What does this PR do?
As per title
cc @ArthurZucker
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27080/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27080/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27080",
"html_url": "https://github.com/huggingface/transformers/pull/27080",
"diff_url": "https://github.com/huggingface/transformers/pull/27080.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27080.patch",
"merged_at": 1698314131000
}
|
https://api.github.com/repos/huggingface/transformers/issues/27079
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27079/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27079/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27079/events
|
https://github.com/huggingface/transformers/pull/27079
| 1,963,021,126 |
PR_kwDOCUB6oc5d1uXs
| 27,079 |
Bump `flash_attn` version to `2.1`
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,698 | 1,698 | 1,698 |
CONTRIBUTOR
| null |
# What does this PR do?
FA-2.0 seems to cause several issues such as https://github.com/huggingface/transformers/issues/26697 / https://github.com/huggingface/transformers/issues/27056
As discussed offline, I propose to bump the FA-2 version to at least 2.1 to overcome these issues in the future
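For illustration, the implied version guard could look roughly like this (a sketch, not the exact check used in `transformers`):
```python
from importlib.metadata import version
from packaging.version import parse

def flash_attn_2_1_or_newer() -> bool:
    try:
        return parse(version("flash_attn")) >= parse("2.1.0")
    except Exception:
        return False  # flash_attn not installed or metadata unavailable
```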
cc @ArthurZucker
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27079/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27079/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27079",
"html_url": "https://github.com/huggingface/transformers/pull/27079",
"diff_url": "https://github.com/huggingface/transformers/pull/27079.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27079.patch",
"merged_at": 1698312065000
}
|
https://api.github.com/repos/huggingface/transformers/issues/27078
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27078/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27078/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27078/events
|
https://github.com/huggingface/transformers/issues/27078
| 1,962,816,859 |
I_kwDOCUB6oc50_jVb
| 27,078 |
'LlamaTokenizerFast' object has no attribute 'apply_chat_template'
|
{
"login": "Manaschauhan28",
"id": 107470066,
"node_id": "U_kgDOBmfc8g",
"avatar_url": "https://avatars.githubusercontent.com/u/107470066?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Manaschauhan28",
"html_url": "https://github.com/Manaschauhan28",
"followers_url": "https://api.github.com/users/Manaschauhan28/followers",
"following_url": "https://api.github.com/users/Manaschauhan28/following{/other_user}",
"gists_url": "https://api.github.com/users/Manaschauhan28/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Manaschauhan28/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Manaschauhan28/subscriptions",
"organizations_url": "https://api.github.com/users/Manaschauhan28/orgs",
"repos_url": "https://api.github.com/users/Manaschauhan28/repos",
"events_url": "https://api.github.com/users/Manaschauhan28/events{/privacy}",
"received_events_url": "https://api.github.com/users/Manaschauhan28/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hey, can you make sure you are using the latest release of `transformers`? And if upgrading to the latest version does not work share the output of `transformers-cli env`",
"\r\n",
"I can't reproduce this. The following snippets works outof the box for me: \r\n```python \r\n>>> from transformers import AutoTokenizer\r\n>>> tokenizer = AutoTokenizer.from_pretrained(\"meta-llama/Llama-2-7b-chat-hf\")\r\n>>> chat = [\r\n{\"role\": \"user\", \"content\": \"Hello, how are you?\"},\r\n{\"role\": \"assistant\", \"content\": \"I'm doing great. How can I help you today?\"},\r\n{\"role\": \"user\", \"content\": \"I'd like to show off how chat templating works!\"},\r\n ]\r\n>>> tokenizer.use_default_system_prompt = False\r\n>>> tokenizer.apply_chat_template(chat, tokenize=False)\r\n\"<s>[INST] Hello, how are you? [/INST] I'm doing great. How can I help you today? </s><s>[INST] I'd like to show off how chat templating works! [/INST]\"\r\n```",
"\r\n\r\nI tried the same and getting the same error \r\n",
"can you print `print(transformers.__version__)`",
"Thanks for your help man. It solved with the latest version of transformers==4.34.1\r\n",
"Glad I helped 🤗 ",
"Note: If you install `transformers` through conda, it'll install an older version. For me, it installed v4.32.1 instead of the current pip version 4.36.2.",
"> Note: If you install `transformers` through conda, it'll install an older version. For me, it installed v4.32.1 instead of the current pip version 4.36.2.\r\n\r\nThis helped a lot, thanks - had to \"pip install\" within the conda environment to get transformers updated."
] | 1,698 | 1,705 | 1,698 |
NONE
| null |
### System Info
I have tried the same code available on the Hugging Face Hub ([Check here](https://huggingface.co/spaces/huggingface-projects/llama-2-7b-chat/blob/main/app.py)), but for me it raises the `apply_chat_template` error in the title.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Please look into this
### Expected behavior
It should work, since `LlamaTokenizerFast` inherits `apply_chat_template` from the Transformers tokenizer base class.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27078/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27078/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/27077
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27077/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27077/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27077/events
|
https://github.com/huggingface/transformers/pull/27077
| 1,962,791,984 |
PR_kwDOCUB6oc5d08to
| 27,077 |
Added Telugu [te] translation for README.md in main
|
{
"login": "hakunamatata1997",
"id": 24734119,
"node_id": "MDQ6VXNlcjI0NzM0MTE5",
"avatar_url": "https://avatars.githubusercontent.com/u/24734119?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hakunamatata1997",
"html_url": "https://github.com/hakunamatata1997",
"followers_url": "https://api.github.com/users/hakunamatata1997/followers",
"following_url": "https://api.github.com/users/hakunamatata1997/following{/other_user}",
"gists_url": "https://api.github.com/users/hakunamatata1997/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hakunamatata1997/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hakunamatata1997/subscriptions",
"organizations_url": "https://api.github.com/users/hakunamatata1997/orgs",
"repos_url": "https://api.github.com/users/hakunamatata1997/repos",
"events_url": "https://api.github.com/users/hakunamatata1997/events{/privacy}",
"received_events_url": "https://api.github.com/users/hakunamatata1997/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27077). All of your documentation changes will be reflected on that endpoint.",
"@stevhliu can you review this?"
] | 1,698 | 1,698 | 1,698 |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27077/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27077/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27077",
"html_url": "https://github.com/huggingface/transformers/pull/27077",
"diff_url": "https://github.com/huggingface/transformers/pull/27077.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27077.patch",
"merged_at": 1698432011000
}
|
https://api.github.com/repos/huggingface/transformers/issues/27076
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27076/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27076/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27076/events
|
https://github.com/huggingface/transformers/pull/27076
| 1,962,688,601 |
PR_kwDOCUB6oc5d0mdr
| 27,076 |
🌐 [i18n-ZH] Translate serialization.md into Chinese
|
{
"login": "yyLeaves",
"id": 76979429,
"node_id": "MDQ6VXNlcjc2OTc5NDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/76979429?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yyLeaves",
"html_url": "https://github.com/yyLeaves",
"followers_url": "https://api.github.com/users/yyLeaves/followers",
"following_url": "https://api.github.com/users/yyLeaves/following{/other_user}",
"gists_url": "https://api.github.com/users/yyLeaves/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yyLeaves/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yyLeaves/subscriptions",
"organizations_url": "https://api.github.com/users/yyLeaves/orgs",
"repos_url": "https://api.github.com/users/yyLeaves/repos",
"events_url": "https://api.github.com/users/yyLeaves/events{/privacy}",
"received_events_url": "https://api.github.com/users/yyLeaves/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27076). All of your documentation changes will be reflected on that endpoint."
] | 1,698 | 1,698 | 1,698 |
CONTRIBUTOR
| null |
# What does this PR do?
Translate serialization.md into Chinese
part of #20095
## Who can review?
Documentation: @stevhliu
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27076/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27076/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27076",
"html_url": "https://github.com/huggingface/transformers/pull/27076",
"diff_url": "https://github.com/huggingface/transformers/pull/27076.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27076.patch",
"merged_at": 1698681030000
}
|
https://api.github.com/repos/huggingface/transformers/issues/27075
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27075/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27075/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27075/events
|
https://github.com/huggingface/transformers/issues/27075
| 1,962,598,000 |
I_kwDOCUB6oc50-t5w
| 27,075 |
Rating on NVIDIA A30 GPU is lower than RTX2080 and equal to GTX1070
|
{
"login": "ThangPetros",
"id": 77918304,
"node_id": "MDQ6VXNlcjc3OTE4MzA0",
"avatar_url": "https://avatars.githubusercontent.com/u/77918304?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ThangPetros",
"html_url": "https://github.com/ThangPetros",
"followers_url": "https://api.github.com/users/ThangPetros/followers",
"following_url": "https://api.github.com/users/ThangPetros/following{/other_user}",
"gists_url": "https://api.github.com/users/ThangPetros/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ThangPetros/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ThangPetros/subscriptions",
"organizations_url": "https://api.github.com/users/ThangPetros/orgs",
"repos_url": "https://api.github.com/users/ThangPetros/repos",
"events_url": "https://api.github.com/users/ThangPetros/events{/privacy}",
"received_events_url": "https://api.github.com/users/ThangPetros/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hey, the model you are using seems to be using `SourceFileLoader` , and custom code.\r\nFirst I would recommend you to use `trust_remote_code = True` and properly register your model to avoid using `model = SourceFileLoader(\"model\", cached_path(hf_bucket_url(model_name,filename=\"model_handling.py\"))).load_module().Wav2Vec2ForCTC.from_pretrained(model_name)\r\np`. \r\nNow you have custom code and I can't debug it for you. If you find that the models we natively supports are indeed slower on better hardwares we could dig in but otherwise would recommend you to ask on the [forum](https://discuss.huggingface.co/) instead as we reserve github for issues or bugs. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,698 | 1,701 | 1,701 |
NONE
| null |
### Model description
Currently I'm evaluating a model from the Hugging Face Hub: model_name = 'nguyenvulebinh/wav2vec2-base-vi'.
Running on a GTX 1070, I measured the average ratio between audio duration and transcription time and got 63x.
With an RTX 2080 it is 130x.
With the A30 it is only 68x.
Is there some conflict? The A30 should have achieved the best result, so why does it not meet expectations?
I'm a newbie, so I would really appreciate any help from everyone.
Thanks!
```python
#!pip install transformers==4.20.0
#!pip install https://github.com/kpu/kenlm/archive/master.zip
#!pip install pyctcdecode==0.4.0
#!pip install huggingface_hub==0.10.0
from transformers.file_utils import cached_path, hf_bucket_url
from importlib.machinery import SourceFileLoader
from transformers import Wav2Vec2ProcessorWithLM
from IPython.lib.display import Audio
import torchaudio
import torch
# Load model & processor
model_name = "nguyenvulebinh/wav2vec2-large-vi-vlsp2020"
model = SourceFileLoader("model", cached_path(hf_bucket_url(model_name,filename="model_handling.py"))).load_module().Wav2Vec2ForCTC.from_pretrained(model_name)
processor = Wav2Vec2ProcessorWithLM.from_pretrained(model_name)
# Load an example audio (16k)
audio, sample_rate = torchaudio.load(cached_path(hf_bucket_url(model_name, filename="t2_0000006682.wav")))
input_data = processor.feature_extractor(audio[0], sampling_rate=16000, return_tensors='pt')
# Infer
output = model(**input_data)
# Output transcript without LM
print(processor.tokenizer.decode(output.logits.argmax(dim=-1)[0].detach().cpu().numpy()))
# Output transcript with LM
print(processor.decode(output.logits.cpu().detach().numpy()[0], beam_width=100).text)
```
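A minimal sketch of a GPU-aware timing measurement for this comparison (assuming a CUDA device is available; it reuses `model`, `processor`, `input_data`, `audio` and `sample_rate` from the snippet above and is illustrative only):
```python
import time

import torch

# Move the model and inputs to the GPU before timing, otherwise the
# comparison between cards mostly measures CPU speed.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device).eval()
inputs = {k: v.to(device) for k, v in input_data.items()}

with torch.no_grad():
    start = time.perf_counter()
    output = model(**inputs)
    if device == "cuda":
        torch.cuda.synchronize()  # wait for GPU kernels to finish before stopping the clock
    elapsed = time.perf_counter() - start

audio_seconds = audio.shape[1] / sample_rate
print(f"real-time factor: {audio_seconds / elapsed:.1f}x")
```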
### Open source status
- [X] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
_No response_
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27075/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27075/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/27074
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27074/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27074/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27074/events
|
https://github.com/huggingface/transformers/pull/27074
| 1,962,287,240 |
PR_kwDOCUB6oc5dzOgz
| 27,074 |
[Llama FA2] Re-add _expand_attention_mask and clean a couple things
|
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@ArthurZucker could you give this a quick review? It'd make the Bart FA PR much easier to continue and should also fix the better transformers problem with optimum",
"_The documentation is not available anymore as the PR was closed or merged._",
"Of course! "
] | 1,698 | 1,698 | 1,698 |
MEMBER
| null |
# What does this PR do?
This PR cleans the attention mask converter a bit more, corrects some docstrings, removes outdated comments, and deprecates `_expand_attention_mask` to fix Optimum.
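As a rough illustration of what such a deprecation shim can look like (a sketch only; the exact function names and module layout in the PR may differ):
```python
import warnings

from transformers.modeling_attn_mask_utils import AttentionMaskConverter


def _expand_mask(mask, dtype, tgt_len=None):
    # Kept only for backwards compatibility (e.g. external code that imports it);
    # new code should use AttentionMaskConverter directly.
    warnings.warn(
        "_expand_mask is deprecated and will be removed; "
        "use AttentionMaskConverter._expand_mask instead.",
        FutureWarning,
    )
    return AttentionMaskConverter._expand_mask(mask=mask, dtype=dtype, tgt_len=tgt_len)
```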
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27074/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27074/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27074",
"html_url": "https://github.com/huggingface/transformers/pull/27074",
"diff_url": "https://github.com/huggingface/transformers/pull/27074.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27074.patch",
"merged_at": 1698318381000
}
|
https://api.github.com/repos/huggingface/transformers/issues/27073
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27073/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27073/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27073/events
|
https://github.com/huggingface/transformers/pull/27073
| 1,961,993,199 |
PR_kwDOCUB6oc5dyO5U
| 27,073 |
[`core`/ `gradient_checkpointing`] Refactor GC - part 2
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@ArthurZucker @LysandreJik - as discussed offline now this PR reverts back the previous behaviour (i.e. if a user sets `module.gradient_checkpointing = True` in a module that supports it, everthing should work fine) + I have set `gradient_checkpointing_func` as a private attribute. This PR is ready for review"
] | 1,698 | 1,698 | 1,698 |
CONTRIBUTOR
| null |
# What does this PR do?
Extends https://github.com/huggingface/transformers/pull/27020 by further simplifying the GC enable / disable mechanism. We can simply iterate over all submodules of the `PreTrainedModel` and check for the attribute `gradient_checkpointing`.
Some models had the `supports_gradient_checkpointing` attribute set to `True` even though they don't actually support it, so this PR fixes that as well.
Some models were also calling `torch.utils.checkpoint.checkpoint` directly instead of `self.gradient_checkpointing_func`; this PR fixes that too.
Also, `gradient_checkpointing` is now private to avoid exposing it as a public attribute.
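As a minimal sketch of the submodule-iteration idea described above (illustrative only; the actual helper in `modeling_utils.py` may be named and structured differently):
```python
import torch


def set_gradient_checkpointing(model: torch.nn.Module, enable: bool, gradient_checkpointing_func=None):
    # Walk every submodule and toggle the flag on those that support it,
    # instead of relying on per-model `_set_gradient_checkpointing` overrides.
    for module in model.modules():
        if hasattr(module, "gradient_checkpointing"):
            module.gradient_checkpointing = enable
            if gradient_checkpointing_func is not None:
                module._gradient_checkpointing_func = gradient_checkpointing_func
```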
cc @ArthurZucker
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27073/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27073/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27073",
"html_url": "https://github.com/huggingface/transformers/pull/27073",
"diff_url": "https://github.com/huggingface/transformers/pull/27073.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27073.patch",
"merged_at": 1698416123000
}
|
https://api.github.com/repos/huggingface/transformers/issues/27072
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27072/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27072/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27072/events
|
https://github.com/huggingface/transformers/pull/27072
| 1,961,985,315 |
PR_kwDOCUB6oc5dyNOv
| 27,072 |
Bump werkzeug from 2.2.3 to 3.0.1 in /examples/research_projects/decision_transformer
|
{
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
}
|
[
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
}
] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,698 | 1,698 | 1,698 |
CONTRIBUTOR
| null |
Bumps [werkzeug](https://github.com/pallets/werkzeug) from 2.2.3 to 3.0.1.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/pallets/werkzeug/releases">werkzeug's releases</a>.</em></p>
<blockquote>
<h2>3.0.1</h2>
<p>This is a security release for the 3.0.x feature branch.</p>
<ul>
<li>Changes: <a href="https://werkzeug.palletsprojects.com/en/3.0.x/changes/#version-3-0-1">https://werkzeug.palletsprojects.com/en/3.0.x/changes/#version-3-0-1</a></li>
</ul>
<h2>3.0.0</h2>
<p>This is a feature release, which includes new features, removes previously deprecated code, and adds new deprecations. The 3.0.x branch is now the supported fix branch, the 2.3.x branch will become a tag marking the end of support for that branch. We encourage everyone to upgrade, and to use a tool such as <a href="https://pypi.org/project/pip-tools/">pip-tools</a> to pin all dependencies and control upgrades. Test with warnings treated as errors to be able to adapt to deprecation warnings early.</p>
<ul>
<li>Changes: <a href="https://werkzeug.palletsprojects.com/en/3.0.x/changes/#version-3-0-0">https://werkzeug.palletsprojects.com/en/3.0.x/changes/#version-3-0-0</a></li>
<li>Milestone: <a href="https://github.com/pallets/werkzeug/milestone/21?closed=1">https://github.com/pallets/werkzeug/milestone/21?closed=1</a></li>
</ul>
<h2>2.3.7</h2>
<p>This is a fix release for the 2.3.x feature branch.</p>
<ul>
<li>Changes: <a href="https://werkzeug.palletsprojects.com/en/2.3.x/changes/#version-2-3-7">https://werkzeug.palletsprojects.com/en/2.3.x/changes/#version-2-3-7</a></li>
<li>Milestone: <a href="https://github.com/pallets/werkzeug/milestone/33?closed=1">https://github.com/pallets/werkzeug/milestone/33?closed=1</a></li>
</ul>
<h2>2.3.6</h2>
<p>This is a fix release for the 2.3.x feature branch.</p>
<ul>
<li>Changes: <a href="https://werkzeug.palletsprojects.com/en/2.3.x/changes/#version-2-3-6">https://werkzeug.palletsprojects.com/en/2.3.x/changes/#version-2-3-6</a></li>
<li>Milestone: <a href="https://github.com/pallets/werkzeug/milestone/32?closed=1">https://github.com/pallets/werkzeug/milestone/32?closed=1</a></li>
</ul>
<h2>2.3.5</h2>
<p>This is a fix release for the 2.3.x feature branch.</p>
<ul>
<li>Changes: <a href="https://werkzeug.palletsprojects.com/en/2.3.x/changes/#version-2-3-5">https://werkzeug.palletsprojects.com/en/2.3.x/changes/#version-2-3-5</a></li>
<li>Milestone: <a href="https://github.com/pallets/werkzeug/milestone/31?closed=1">https://github.com/pallets/werkzeug/milestone/31?closed=1</a></li>
</ul>
<h2>2.3.4</h2>
<p>This is a fix release for the 2.3.x release branch.</p>
<ul>
<li>Changes: <a href="https://werkzeug.palletsprojects.com/en/2.3.x/changes/#version-2-3-4">https://werkzeug.palletsprojects.com/en/2.3.x/changes/#version-2-3-4</a></li>
<li>Milestone: <a href="https://github.com/pallets/werkzeug/milestone/30?closed=1">https://github.com/pallets/werkzeug/milestone/30?closed=1</a></li>
</ul>
<h2>2.3.3</h2>
<p>This is a fix release for the 2.3.x release branch.</p>
<ul>
<li>Changes: <a href="https://werkzeug.palletsprojects.com/en/2.3.x/changes/#version-2-3-3">https://werkzeug.palletsprojects.com/en/2.3.x/changes/#version-2-3-3</a></li>
<li>Milestone: <a href="https://github.com/pallets/werkzeug/milestone/29?closed=1">https://github.com/pallets/werkzeug/milestone/29?closed=1</a></li>
</ul>
<h2>2.3.2</h2>
<p>This is a fix release for the 2.3.x release branch.</p>
<ul>
<li>Changes: <a href="https://werkzeug.palletsprojects.com/en/2.3.x/changes/#version-2-3-2">https://werkzeug.palletsprojects.com/en/2.3.x/changes/#version-2-3-2</a></li>
<li>Milestone: <a href="https://github.com/pallets/werkzeug/milestone/28?closed=1">https://github.com/pallets/werkzeug/milestone/28?closed=1</a></li>
</ul>
<h2>2.3.1</h2>
<p>This is a fix release for the 2.3.x release branch.</p>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/pallets/werkzeug/blob/main/CHANGES.rst">werkzeug's changelog</a>.</em></p>
<blockquote>
<h2>Version 3.0.1</h2>
<p>Released 2023-10-24</p>
<ul>
<li>Fix slow multipart parsing for large parts potentially enabling DoS
attacks. :cwe:<code>CWE-407</code></li>
</ul>
<h2>Version 3.0.0</h2>
<p>Released 2023-09-30</p>
<ul>
<li>Remove previously deprecated code. :pr:<code>2768</code></li>
<li>Deprecate the <code>__version__</code> attribute. Use feature detection, or
<code>importlib.metadata.version("werkzeug")</code>, instead. :issue:<code>2770</code></li>
<li><code>generate_password_hash</code> uses scrypt by default. :issue:<code>2769</code></li>
<li>Add the <code>"werkzeug.profiler"</code> item to the WSGI <code>environ</code> dictionary
passed to <code>ProfilerMiddleware</code>'s <code>filename_format</code> function. It contains
the <code>elapsed</code> and <code>time</code> values for the profiled request. :issue:<code>2775</code></li>
<li>Explicitly marked the PathConverter as non path isolating. :pr:<code>2784</code></li>
</ul>
<h2>Version 2.3.8</h2>
<p>Unreleased</p>
<h2>Version 2.3.7</h2>
<p>Released 2023-08-14</p>
<ul>
<li>Use <code>flit_core</code> instead of <code>setuptools</code> as build backend.</li>
<li>Fix parsing of multipart bodies. :issue:<code>2734</code> Adjust index of last newline
in data start. :issue:<code>2761</code></li>
<li>Parsing ints from header values strips spacing first. :issue:<code>2734</code></li>
<li>Fix empty file streaming when testing. :issue:<code>2740</code></li>
<li>Clearer error message when URL rule does not start with slash. :pr:<code>2750</code></li>
<li><code>Accept</code> <code>q</code> value can be a float without a decimal part. :issue:<code>2751</code></li>
</ul>
<h2>Version 2.3.6</h2>
<p>Released 2023-06-08</p>
<ul>
<li><code>FileStorage.content_length</code> does not fail if the form data did not provide a
value. :issue:<code>2726</code></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/pallets/werkzeug/commit/ce4eff5902d4a6b41a20ecc6e4029741284a87fd"><code>ce4eff5</code></a> Release version 3.0.1</li>
<li><a href="https://github.com/pallets/werkzeug/commit/b1916c0c083e0be1c9d887ee2f3d696922bfc5c1"><code>b1916c0</code></a> Fix: slow multipart parsing for huge files with few CR/LF characters</li>
<li><a href="https://github.com/pallets/werkzeug/commit/726eaa28593d859548da3477859c914732f012ef"><code>726eaa2</code></a> Release version 3.0.0</li>
<li><a href="https://github.com/pallets/werkzeug/commit/64275425888b6ca4f5ebdfa1a9df814317718290"><code>6427542</code></a> Default the PathConverter (and descendants) to be non part isolating</li>
<li><a href="https://github.com/pallets/werkzeug/commit/4820d8c487e5db9f43645c31c4123fce5ac5ad32"><code>4820d8c</code></a> Provide elapsed and timestamp info to filename_format</li>
<li><a href="https://github.com/pallets/werkzeug/commit/599993d7382eeb96add9f38b4431a2f50cd2c9f2"><code>599993d</code></a> Bump pypa/gh-action-pypi-publish from 1.8.8 to 1.8.10 (<a href="https://redirect.github.com/pallets/werkzeug/issues/2780">#2780</a>)</li>
<li><a href="https://github.com/pallets/werkzeug/commit/a2394ed51ed8697b5523243acb10cb589c0f7834"><code>a2394ed</code></a> Bump slsa-framework/slsa-github-generator from 1.7.0 to 1.9.0 (<a href="https://redirect.github.com/pallets/werkzeug/issues/2779">#2779</a>)</li>
<li><a href="https://github.com/pallets/werkzeug/commit/1efd6f3c2c31ec9479d8b8d9219bdb042e55bd15"><code>1efd6f3</code></a> Bump actions/checkout from 3.5.3 to 3.6.0 (<a href="https://redirect.github.com/pallets/werkzeug/issues/2778">#2778</a>)</li>
<li><a href="https://github.com/pallets/werkzeug/commit/76a5419d2ee8b7785c0304d58a94d6c0387c976c"><code>76a5419</code></a> Bump pypa/gh-action-pypi-publish from 1.8.8 to 1.8.10</li>
<li><a href="https://github.com/pallets/werkzeug/commit/ce8cfe7dbb73b56c982a9c74162084cdb284c2f5"><code>ce8cfe7</code></a> Bump slsa-framework/slsa-github-generator from 1.7.0 to 1.9.0</li>
<li>Additional commits viewable in <a href="https://github.com/pallets/werkzeug/compare/2.2.3...3.0.1">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details>
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27072/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27072/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27072",
"html_url": "https://github.com/huggingface/transformers/pull/27072",
"diff_url": "https://github.com/huggingface/transformers/pull/27072.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27072.patch",
"merged_at": 1698303388000
}
|
https://api.github.com/repos/huggingface/transformers/issues/27071
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27071/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27071/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27071/events
|
https://github.com/huggingface/transformers/pull/27071
| 1,961,686,791 |
PR_kwDOCUB6oc5dxM0Q
| 27,071 |
[docstring] fix incorrect llama docstring: encoder -> decoder
|
{
"login": "ztjhz",
"id": 59118459,
"node_id": "MDQ6VXNlcjU5MTE4NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/59118459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ztjhz",
"html_url": "https://github.com/ztjhz",
"followers_url": "https://api.github.com/users/ztjhz/followers",
"following_url": "https://api.github.com/users/ztjhz/following{/other_user}",
"gists_url": "https://api.github.com/users/ztjhz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ztjhz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ztjhz/subscriptions",
"organizations_url": "https://api.github.com/users/ztjhz/orgs",
"repos_url": "https://api.github.com/users/ztjhz/repos",
"events_url": "https://api.github.com/users/ztjhz/events{/privacy}",
"received_events_url": "https://api.github.com/users/ztjhz/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27071). All of your documentation changes will be reflected on that endpoint."
] | 1,698 | 1,698 | 1,698 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes the erroneous reference from "encoder" to "decoder" in the llama docstring.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@zphang @ydshieh @abzdel
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27071/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27071/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27071",
"html_url": "https://github.com/huggingface/transformers/pull/27071",
"diff_url": "https://github.com/huggingface/transformers/pull/27071.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27071.patch",
"merged_at": 1698250145000
}
|
https://api.github.com/repos/huggingface/transformers/issues/27070
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27070/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27070/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27070/events
|
https://github.com/huggingface/transformers/issues/27070
| 1,961,665,595 |
I_kwDOCUB6oc507KQ7
| 27,070 |
[Maintenance] Unused dependency in Pytorch translation example
|
{
"login": "SoyGema",
"id": 24204714,
"node_id": "MDQ6VXNlcjI0MjA0NzE0",
"avatar_url": "https://avatars.githubusercontent.com/u/24204714?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SoyGema",
"html_url": "https://github.com/SoyGema",
"followers_url": "https://api.github.com/users/SoyGema/followers",
"following_url": "https://api.github.com/users/SoyGema/following{/other_user}",
"gists_url": "https://api.github.com/users/SoyGema/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SoyGema/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SoyGema/subscriptions",
"organizations_url": "https://api.github.com/users/SoyGema/orgs",
"repos_url": "https://api.github.com/users/SoyGema/repos",
"events_url": "https://api.github.com/users/SoyGema/events{/privacy}",
"received_events_url": "https://api.github.com/users/SoyGema/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hey! Thanks for opening an issue 🤗 \r\nI understand your frustration, seems like this was added in #10971 to allow usage of certain datasets, which for me should be left to the user. \r\nBut I am not sure I understand why this should be changed in transformers and not the repo who use transformers 😅 \r\nLet's try to keep things as general as possible ",
"I understand. Hey can you please help me understand a detail ? 🙏🏻\r\n\r\nWhat Im taking is that it is used in a significant amount of datasets for not being removed, is this correct? ( EDITED for clarification : this vector thought comes under the premise of the willing of being _as general as possible_ , as it includes a dependency used in most datasets)\r\n\r\nIf I might ask, ⚗️⚗️ Why adding then `py7zr` dependency and not `datasets` then ? ⚗️⚗️\r\n(EDITED for clarification: Why do you add a dependency from another library? ) \r\n\r\n🙏🏻🙏🏻🙏🏻Thanks for having a look at this!🙏🏻🙏🏻🙏🏻",
"Hey ! \r\nI had a look and I got the why. I get that is related to examples with some datasets! \r\nTherefore I am closing the issue "
] | 1,698 | 1,698 | 1,698 |
CONTRIBUTOR
| null |
### System Info
```
`transformers` version: 4.34.1
Platform: macOS-13.4.1-arm64-arm-64bit
Python version: 3.10.10
Huggingface_hub version: 0.17.3
Safetensors version: 0.4.0
Accelerate version: 0.23.0
Accelerate config: not found
PyTorch version (GPU?): 2.1.0 (False)
Tensorflow version (GPU?): not installed (NA)
Flax version (CPU?/GPU?/TPU?): not installed (NA)
Jax version: not installed
JaxLib version: not installed
Using GPU in script?: <fill in>
Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
@ArthurZucker
@younesbelkada
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
## Context
While maintaining [this repository](https://github.com/SoyGema/The-Lord-of-The-Words-The-two-frameworks), which uses one of the [translation](https://github.com/huggingface/transformers/blob/244a53e0f6a8d95d429559cfc49a07a4e85cc680/examples/pytorch/translation/run_translation.py) examples from HF with PyTorch, and in the context of a security and maintenance dependency analysis with [pip-rating](https://github.com/Nekmo/pip-rating),
I found that `py7zr` is no longer used by the example at hand in the stable version of transformers.
<img width="677" alt="Captura de pantalla 2023-10-25 a las 16 40 10" src="https://github.com/huggingface/transformers/assets/24204714/e9dff423-85f4-48c0-8983-f3b52fc20a1f">
## Analysis results
- The dependency file in the translation example, [requirements.txt](https://github.com/huggingface/transformers/blob/244a53e0f6a8d95d429559cfc49a07a4e85cc680/examples/pytorch/translation/requirements.txt), was created [2 years ago](https://github.com/huggingface/transformers/blame/244a53e0f6a8d95d429559cfc49a07a4e85cc680/examples/pytorch/translation/requirements.txt#L6).
- Test: the translation example worked without `py7zr` on macOS with the stable version, `transformers==4.34.1`.
Also tested the W&B integration.
- Regarding the pip-rating findings and further analysis: the project has reported [several vulnerabilities](https://github.com/miurahr/py7zr#security-notice) and its last release was a couple of months ago. The maintainers have communicated actively and transparently about the challenges with traversal attacks, which is very nice. But if releases are getting further apart over time, it might be a good idea to drop the dependency if it is not used in the example.
## Potential fixes hypothesis
If you can confirm that this dependency no longer has an impact on the stable translation example, it might be useful to delete it, since the project doesn't need it and the package's current health is not optimal. Happy to submit a PR!
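For completeness, a rough sketch (illustrative only) of how one can double-check that the example scripts contain no direct usage of `py7zr`:
```python
from pathlib import Path

# Look for any direct reference to py7zr in the translation example sources.
example_dir = Path("examples/pytorch/translation")
hits = [p for p in example_dir.rglob("*.py") if "py7zr" in p.read_text()]
print(hits or "no direct py7zr usage found")
```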
Thanks for the time dedicated to this issue.
And thanks for making transformers!
### Expected behavior
Example working without `py7zr` dependency
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27070/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27070/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/27069
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27069/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27069/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27069/events
|
https://github.com/huggingface/transformers/pull/27069
| 1,961,515,893 |
PR_kwDOCUB6oc5dwnRX
| 27,069 |
Handle unsharded Llama2 model types in conversion script
|
{
"login": "coreyhu",
"id": 4167654,
"node_id": "MDQ6VXNlcjQxNjc2NTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/4167654?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/coreyhu",
"html_url": "https://github.com/coreyhu",
"followers_url": "https://api.github.com/users/coreyhu/followers",
"following_url": "https://api.github.com/users/coreyhu/following{/other_user}",
"gists_url": "https://api.github.com/users/coreyhu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/coreyhu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/coreyhu/subscriptions",
"organizations_url": "https://api.github.com/users/coreyhu/orgs",
"repos_url": "https://api.github.com/users/coreyhu/repos",
"events_url": "https://api.github.com/users/coreyhu/events{/privacy}",
"received_events_url": "https://api.github.com/users/coreyhu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"There's currently two options for unsharded `model_size`, 7B and 7Bf (which I assume represents 7B-chat). While we could alternatively remove the *-Bf options for `model_size`, it seems it's kept for backwards compatibility (line 23). Instead, this PR allows for both 7B and 7B-chat to be treated as unsharded.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27069). All of your documentation changes will be reflected on that endpoint."
] | 1,698 | 1,698 | 1,698 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes the unsharded condition to check the previously looked-up `num_shards` instead of the model type. This is important because multiple model types are unsharded (7B and 7Bf).
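A rough sketch of the idea (illustrative only, not the exact diff; `NUM_SHARDS`, `model_size` and `input_base_path` follow the conversion script's existing conventions):
```python
import os

import torch

num_shards = NUM_SHARDS[model_size]  # looked up once, covers both "7B" and "7Bf"

if num_shards == 1:
    # unsharded checkpoint: a single consolidated.00.pth file
    loaded = torch.load(os.path.join(input_base_path, "consolidated.00.pth"), map_location="cpu")
else:
    # sharded checkpoint: one consolidated.XX.pth file per shard
    loaded = [
        torch.load(os.path.join(input_base_path, f"consolidated.{i:02d}.pth"), map_location="cpu")
        for i in range(num_shards)
    ]
```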
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27069/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27069/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27069",
"html_url": "https://github.com/huggingface/transformers/pull/27069",
"diff_url": "https://github.com/huggingface/transformers/pull/27069.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27069.patch",
"merged_at": 1698302467000
}
|
https://api.github.com/repos/huggingface/transformers/issues/27068
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27068/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27068/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27068/events
|
https://github.com/huggingface/transformers/pull/27068
| 1,961,466,778 |
PR_kwDOCUB6oc5dwck3
| 27,068 |
[`Trainer` / `GC`] Add `gradient_checkpointing_kwargs` in trainer and training arguments
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Also why does it *partially* fix the issue",
"It partially fixes the issue because I need https://github.com/huggingface/peft/pull/1036 to be merged to fix the bug with respect to PEFT models"
] | 1,698 | 1,698 | 1,698 |
CONTRIBUTOR
| null |
# What does this PR do?
Partially fixes: https://github.com/huggingface/trl/pull/912
Following https://github.com/huggingface/transformers/pull/27020 it is important to propagate `gradient_checkpointing_kwargs` in `Trainer` as well
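For illustration, usage would then look roughly like this (a sketch; `model` and `train_dataset` are placeholders, and the kwargs are forwarded to `model.gradient_checkpointing_enable`):
```python
from transformers import Trainer, TrainingArguments

args = TrainingArguments(
    output_dir="out",
    gradient_checkpointing=True,
    # forwarded as gradient_checkpointing_kwargs to gradient_checkpointing_enable(...)
    gradient_checkpointing_kwargs={"use_reentrant": False},
)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
```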
cc @ArthurZucker
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27068/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27068/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27068",
"html_url": "https://github.com/huggingface/transformers/pull/27068",
"diff_url": "https://github.com/huggingface/transformers/pull/27068.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27068.patch",
"merged_at": 1698666108000
}
|
https://api.github.com/repos/huggingface/transformers/issues/27067
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27067/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27067/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27067/events
|
https://github.com/huggingface/transformers/pull/27067
| 1,961,409,242 |
PR_kwDOCUB6oc5dwQPY
| 27,067 |
Created SECURITY.md
|
{
"login": "shettyvarshaa",
"id": 112955692,
"node_id": "U_kgDOBruRLA",
"avatar_url": "https://avatars.githubusercontent.com/u/112955692?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shettyvarshaa",
"html_url": "https://github.com/shettyvarshaa",
"followers_url": "https://api.github.com/users/shettyvarshaa/followers",
"following_url": "https://api.github.com/users/shettyvarshaa/following{/other_user}",
"gists_url": "https://api.github.com/users/shettyvarshaa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shettyvarshaa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shettyvarshaa/subscriptions",
"organizations_url": "https://api.github.com/users/shettyvarshaa/orgs",
"repos_url": "https://api.github.com/users/shettyvarshaa/repos",
"events_url": "https://api.github.com/users/shettyvarshaa/events{/privacy}",
"received_events_url": "https://api.github.com/users/shettyvarshaa/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Yes, I thought of adding handling bug into security policy which is also mentioned in contributing.md, but also, creating security doc will complete your community profile to meet the standards defined for a Repo\r\n\r\n[Screenshot of this repo's current community profile](https://github.com/huggingface/transformers/assets/112955692/869974bf-6dac-4e5c-8422-f52cd316ef71)\r\n\r\n",
"No strong opinion, but will ask internally to have guidance regarding the bounty hunting program rather than bug fixed in this! ",
"https://github.com/huggingface/transformers/commit/15cd096288d369eb8b190432c04a588198d191a5 fixes this 😉 thanks for the tip 🤗 "
] | 1,698 | 1,698 | 1,698 |
CONTRIBUTOR
| null |
Created a security policy to enhance the community profile of the repository and to help prevent potential vulnerabilities.
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #24627
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
Issue raised in the [forum](https://discuss.huggingface.co/t/security-policy/59844)
## Who can review?
CC : @stevhliu
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27067/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27067/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27067",
"html_url": "https://github.com/huggingface/transformers/pull/27067",
"diff_url": "https://github.com/huggingface/transformers/pull/27067.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27067.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/27066
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27066/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27066/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27066/events
|
https://github.com/huggingface/transformers/pull/27066
| 1,961,216,750 |
PR_kwDOCUB6oc5dvlyp
| 27,066 |
docs: add addition library install
|
{
"login": "pphuc25",
"id": 81808312,
"node_id": "MDQ6VXNlcjgxODA4MzEy",
"avatar_url": "https://avatars.githubusercontent.com/u/81808312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pphuc25",
"html_url": "https://github.com/pphuc25",
"followers_url": "https://api.github.com/users/pphuc25/followers",
"following_url": "https://api.github.com/users/pphuc25/following{/other_user}",
"gists_url": "https://api.github.com/users/pphuc25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pphuc25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pphuc25/subscriptions",
"organizations_url": "https://api.github.com/users/pphuc25/orgs",
"repos_url": "https://api.github.com/users/pphuc25/repos",
"events_url": "https://api.github.com/users/pphuc25/events{/privacy}",
"received_events_url": "https://api.github.com/users/pphuc25/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hey @pphuc25 - our convention with the `examples` files is that we assume that the user has the latest version of Transformers through an [editable install](https://huggingface.co/docs/transformers/installation#editable-install). The `requirements.txt` files then specify the **additional** requirements needed to run the examples (e.g. `evaluate` or `jiwer`). So in this case, there's no need to include `transformers` or `tokenizers`, as they are installed when the user runs the editable installation! Hope that makes sense!",
"Thanks, this more make sense to me"
] | 1,698 | 1,700 | 1,700 |
CONTRIBUTOR
| null |
Hi,
Following the tutorial instructions, I discovered that the `requirements.txt` file is missing some libraries, such as `transformers`, and pins an incorrect version of `tokenizers`. Consequently, I added the missing libraries to the list.
I would like to cc @sanchit-gandhi to review my PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27066/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27066/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27066",
"html_url": "https://github.com/huggingface/transformers/pull/27066",
"diff_url": "https://github.com/huggingface/transformers/pull/27066.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27066.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/27065
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27065/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27065/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27065/events
|
https://github.com/huggingface/transformers/pull/27065
| 1,961,166,665 |
PR_kwDOCUB6oc5dvbAs
| 27,065 |
🌐 [i18n-ZH] Translate custom_models.md into Chinese
|
{
"login": "yyLeaves",
"id": 76979429,
"node_id": "MDQ6VXNlcjc2OTc5NDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/76979429?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yyLeaves",
"html_url": "https://github.com/yyLeaves",
"followers_url": "https://api.github.com/users/yyLeaves/followers",
"following_url": "https://api.github.com/users/yyLeaves/following{/other_user}",
"gists_url": "https://api.github.com/users/yyLeaves/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yyLeaves/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yyLeaves/subscriptions",
"organizations_url": "https://api.github.com/users/yyLeaves/orgs",
"repos_url": "https://api.github.com/users/yyLeaves/repos",
"events_url": "https://api.github.com/users/yyLeaves/events{/privacy}",
"received_events_url": "https://api.github.com/users/yyLeaves/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27065). All of your documentation changes will be reflected on that endpoint."
] | 1,698 | 1,698 | 1,698 |
CONTRIBUTOR
| null |
# What does this PR do?
Translate custom_models.md into Chinese
part of #20095
## Who can review?
Documentation: @stevhliu
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27065/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27065/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27065",
"html_url": "https://github.com/huggingface/transformers/pull/27065",
"diff_url": "https://github.com/huggingface/transformers/pull/27065.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27065.patch",
"merged_at": 1698258032000
}
|
https://api.github.com/repos/huggingface/transformers/issues/27064
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27064/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27064/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27064/events
|
https://github.com/huggingface/transformers/pull/27064
| 1,961,001,299 |
PR_kwDOCUB6oc5du3FL
| 27,064 |
Safetensors serialization by default
|
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@Narsil, this is what is currently supported and not supported:\r\n\r\n- TF -TF - Supported, tested here: https://github.com/huggingface/transformers/blob/a79600fc26d57a9262ce069434e3bc0ea6b8ef47/tests/test_modeling_tf_utils.py#L501-L509\r\n- TF - Flax - **Not supported**\r\n- TF - Pt - Supported, tested here: https://github.com/huggingface/transformers/blob/a79600fc26d57a9262ce069434e3bc0ea6b8ef47/tests/test_modeling_tf_utils.py#L511-L522\r\n- Flax - Flax - Supported, tested here: https://github.com/huggingface/transformers/blob/a79600fc26d57a9262ce069434e3bc0ea6b8ef47/tests/test_modeling_flax_utils.py#L236-L244\r\n- Flax - TF - **Not supported**\r\n- Flax - Pt - Supported, tested here: https://github.com/huggingface/transformers/blob/a79600fc26d57a9262ce069434e3bc0ea6b8ef47/tests/test_modeling_flax_utils.py#L246-L256\r\n\r\n> From you initial comment I understand it's not possible, but it's not entirely clear for me as to why (you mention sharded weights, is it the only restriction? If yes, from what I read it should be okay-ish to be able to at least load for those, no ?)\r\n\r\nI mention this in the PR description:\r\n\r\n> TensorFlow models can load models in safetensors saved from PyTorch and TensorFlow, but it cannot load them from Flax. This can be eventually worked on; meanwhile, I'll write this in the docs with workarounds to get models saved in Flax to work in TensorFlow for those interested.\r\n\r\nIt should be pretty straightforward to enable it, but I suspect extremely little usage for a TF <> Flax conversion where no PyTorch conversion exists. I'm planning to add this to the documentation and IMO we can work on it afterwards if there are requests.",
"I will proceed to merge this and write a small explanatory doc tomorrow. I would like for the slow tests to run on this before the release.",
"Awesome ! Thanks a LOT for this."
] | 1,698 | 1,698 | 1,698 |
MEMBER
| null |
This PR aims to do one thing but is larger than expected. I'm happy to break it down into smaller PRs if it helps for reviewing.
This PR aims to switch safe serialization to `True` by default for `torch` models. In doing so, it revealed a few bugs in the existing implementation and `safetensors` support that this PR fixes.
Additionally, support for `safetensors` for Flax models is added so that models saved from PyTorch after merging this PR can be used in both TensorFlow and Flax, and for models saved from TensorFlow/Flax to be loaded in PyTorch models.
The following should be worked on shortly to enable switching to safetensors by default for TensorFlow and Flax as well:
- There is no support for sharded weights in TensorFlow
- There is no support for sharded weights in Flax
Additionally, I'll contribute some documentation making the following clear:
- TensorFlow models can load models in safetensors saved from PyTorch and TensorFlow, but it cannot load them from Flax. This can be eventually worked on; meanwhile, I'll write this in the docs with workarounds to get models saved in Flax to work in TensorFlow for those interested.
- Same, but for Flax models loaded from TensorFlow
Thanks, @Rocketknight1, for the help on TensorFlow's side.
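As a quick illustration of what the new default means for users (a sketch, not part of the PR's diff):
```python
# After this change, torch models save to safetensors by default; the existing
# `safe_serialization` flag still allows opting back into the old format.
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")
model.save_pretrained("local-dir")                            # writes model.safetensors
model.save_pretrained("local-dir", safe_serialization=False)  # writes pytorch_model.bin
```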
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27064/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27064/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27064",
"html_url": "https://github.com/huggingface/transformers/pull/27064",
"diff_url": "https://github.com/huggingface/transformers/pull/27064.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27064.patch",
"merged_at": 1698776209000
}
|
https://api.github.com/repos/huggingface/transformers/issues/27063
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27063/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27063/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27063/events
|
https://github.com/huggingface/transformers/pull/27063
| 1,960,995,163 |
PR_kwDOCUB6oc5du1t1
| 27,063 |
[`docs`] Add `MaskGenerationPipeline` in docs
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,698 | 1,698 | 1,698 |
CONTRIBUTOR
| null |
# What does this PR do?
As requested on the Hub here: https://huggingface.co/facebook/sam-vit-base/discussions/3#65389abb67325b6218f92719, there is currently no documentation for `MaskGenerationPipeline`, the pipeline used with the SAM model.
`MaskGenerationPipeline` was also missing from the main init.
cc @ArthurZucker
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27063/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27063/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27063",
"html_url": "https://github.com/huggingface/transformers/pull/27063",
"diff_url": "https://github.com/huggingface/transformers/pull/27063.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27063.patch",
"merged_at": 1698255096000
}
|
https://api.github.com/repos/huggingface/transformers/issues/27062
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27062/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27062/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27062/events
|
https://github.com/huggingface/transformers/pull/27062
| 1,960,841,981 |
PR_kwDOCUB6oc5duUdX
| 27,062 |
Skip-test
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27062). All of your documentation changes will be reflected on that endpoint."
] | 1,698 | 1,698 | 1,698 |
COLLABORATOR
| null |
# What does this PR do?
Skip the PLBart test. It seems to have started failing after #26752.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27062/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27062/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27062",
"html_url": "https://github.com/huggingface/transformers/pull/27062",
"diff_url": "https://github.com/huggingface/transformers/pull/27062.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27062.patch",
"merged_at": 1698223653000
}
|
https://api.github.com/repos/huggingface/transformers/issues/27061
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27061/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27061/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27061/events
|
https://github.com/huggingface/transformers/issues/27061
| 1,960,833,644 |
I_kwDOCUB6oc503_Js
| 27,061 |
ImportError: cannot import name 'SeamlessM4TModel' from 'transformers'
|
{
"login": "mwzkhalil",
"id": 77918472,
"node_id": "MDQ6VXNlcjc3OTE4NDcy",
"avatar_url": "https://avatars.githubusercontent.com/u/77918472?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mwzkhalil",
"html_url": "https://github.com/mwzkhalil",
"followers_url": "https://api.github.com/users/mwzkhalil/followers",
"following_url": "https://api.github.com/users/mwzkhalil/following{/other_user}",
"gists_url": "https://api.github.com/users/mwzkhalil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mwzkhalil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mwzkhalil/subscriptions",
"organizations_url": "https://api.github.com/users/mwzkhalil/orgs",
"repos_url": "https://api.github.com/users/mwzkhalil/repos",
"events_url": "https://api.github.com/users/mwzkhalil/events{/privacy}",
"received_events_url": "https://api.github.com/users/mwzkhalil/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"\r\n",
"A duplicate of #27036, please make sure you install form source 🤗 "
] | 1,698 | 1,698 | 1,698 |
NONE
| null |
### System Info
I encountered an ImportError when trying to import `SeamlessM4TModel` from the `transformers` library.
Code:
```python
from transformers import AutoProcessor, SeamlessM4TModel

processor = AutoProcessor.from_pretrained("facebook/hf-seamless-m4t-large")
model = SeamlessM4TModel.from_pretrained("facebook/hf-seamless-m4t-large")
```
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
nan
### Expected behavior
nan
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27061/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27061/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/27060
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27060/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27060/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27060/events
|
https://github.com/huggingface/transformers/pull/27060
| 1,960,815,824 |
PR_kwDOCUB6oc5duOrz
| 27,060 |
Remove-auth-token
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,698 | 1,699 | 1,699 |
COLLABORATOR
| null |
# What does this PR do?
Replaces internal uses of the `use_auth_token` argument with `token`. This removes the warning we get in most of the push-to-hub tests, reducing noise there as well as in external warnings.
fixes #27049
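For reference, a small sketch of the rename this PR propagates internally (the repo name and token value are placeholders):
```python
# Preferred keyword after this cleanup vs. the deprecated one that emits the warning.
from transformers import AutoModel

model = AutoModel.from_pretrained("org/private-model", token="hf_xxx")           # preferred
model = AutoModel.from_pretrained("org/private-model", use_auth_token="hf_xxx")  # deprecated, warns
```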
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27060/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27060/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27060",
"html_url": "https://github.com/huggingface/transformers/pull/27060",
"diff_url": "https://github.com/huggingface/transformers/pull/27060.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27060.patch",
"merged_at": 1699881654000
}
|
https://api.github.com/repos/huggingface/transformers/issues/27059
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27059/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27059/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27059/events
|
https://github.com/huggingface/transformers/issues/27059
| 1,960,796,926 |
I_kwDOCUB6oc5032L-
| 27,059 |
Tracking full integration for fill-in-middle (FIM)
|
{
"login": "sayakpaul",
"id": 22957388,
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sayakpaul",
"html_url": "https://github.com/sayakpaul",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 6126880899,
"node_id": "LA_kwDOCUB6oc8AAAABbTDIgw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/contributions-welcome",
"name": "contributions-welcome",
"color": "F99E09",
"default": false,
"description": ""
}
] |
open
| false | null |
[] |
[
"Hi @sayakpaul, I would like to work on this issue!",
"Awesome!\r\n\r\nPinging @ArthurZucker to check if we should start with pipeline or the example. ",
"Both are independent, not sure which one will be the easiest ",
"I will start with the example resource!\r\nDo you recommend I check out any existing resources examples/notebooks on FIM?\r\n\r\nThanks!",
"Yes!",
"Hi @sayakpaul, I was busy with some work hence starting this now.\r\n\r\nIs there a list of models in HF transformers that support FIM? Or perhaps a faster way of identifying which models support that (via a tag or something similar)?",
"Cc: @pacman100 ",
"Hi @sayakpaul, is there a set standard way of naming FIM tokens across tokenizers of all models that support this objective? (for example, is it certain that all prefix tokens will be `<fim_prefix>`, etc?)\r\n\r\nIf not, should I add the prefix, suffix and middle FIM token names as data arguments in the example script (and default to `<fim_prefix>`, `<fim_suffix>`, and `<fim_middle>` if not provided)?\r\n\r\nEDIT:\r\nIn the latter case, there are two possibilities: \r\n1. Either the tokenizer doesn't have those tokens (in which case we add these tokens to the tokenizer vocab and resize model embeddings)\r\n2. The user forgot to provide them (same treatment as the first case along with a warning?).",
"I have created a draft PR for the example. Also have started the work on the pipeline (will create a PR a little later).",
"You can check CodeLlama which sets the precedent for direct integration, but we can also compare starCoder, which does not have this directly in the tokenizers. Fine with setting new standards as the pipeline PR progresses! ",
"@ArthurZucker Cool! CodeLlama seems like a great option to start.",
"Hi, I would like to support on this. @tanaymeh @sayakpaul please let me know how can I support ",
"> Hi, I would like to support on this. @tanaymeh @sayakpaul please let me know how can I support \n\nThanks for offering help @codeserra, however I am working on implementing both the example and pipeline and don't need any help at this moment.\n\nCheers!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@sayakpaul @ArthurZucker Can you please re-open this issue, I am still very much working on this, though a little swamped by exams this week!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Not stale."
] | 1,698 | 1,707 | null |
MEMBER
| null |
Given that most code LLMs are trained using the FIM objective [1], I think it makes a lot of sense to work on:
* A training example just like how we have it here: https://github.com/huggingface/transformers/tree/main/examples/pytorch. This can also be turned into a task guide later (example: https://huggingface.co/docs/transformers/tasks/language_modeling).
* A dedicated pipeline so that users can load FIM-trained models easily with `pipeline("fill-in-middle")`.
* A task page to list all the relevant resources.
Cc @ArthurZucker since we discussed it internally via Slack.
**References**
[1] Efficient Training of Language Models to Fill in the Middle, https://arxiv.org/abs/2207.14255.
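To make the objective concrete, here is a rough sketch of FIM inference as it already works with CodeLlama, whose tokenizer splits the prompt on its fill token; the proposed `fill-in-middle` pipeline would wrap something like this:
```python
# Rough sketch of fill-in-middle inference with CodeLlama. `<FILL_ME>` is CodeLlama's
# fill token; other FIM-trained models (e.g. StarCoder) use <fim_prefix>/<fim_suffix>/<fim_middle>.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")
model = AutoModelForCausalLM.from_pretrained("codellama/CodeLlama-7b-hf")

prompt = 'def remove_non_ascii(s: str) -> str:\n    """ <FILL_ME>\n    return result'
inputs = tokenizer(prompt, return_tensors="pt")
generated = model.generate(inputs["input_ids"], max_new_tokens=64)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```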
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27059/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27059/timeline
|
reopened
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/27058
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27058/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27058/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27058/events
|
https://github.com/huggingface/transformers/issues/27058
| 1,960,701,677 |
I_kwDOCUB6oc503e7t
| 27,058 |
Add persistent_workers parameter to Dataloader
|
{
"login": "Sorrow321",
"id": 20703486,
"node_id": "MDQ6VXNlcjIwNzAzNDg2",
"avatar_url": "https://avatars.githubusercontent.com/u/20703486?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sorrow321",
"html_url": "https://github.com/Sorrow321",
"followers_url": "https://api.github.com/users/Sorrow321/followers",
"following_url": "https://api.github.com/users/Sorrow321/following{/other_user}",
"gists_url": "https://api.github.com/users/Sorrow321/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sorrow321/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sorrow321/subscriptions",
"organizations_url": "https://api.github.com/users/Sorrow321/orgs",
"repos_url": "https://api.github.com/users/Sorrow321/repos",
"events_url": "https://api.github.com/users/Sorrow321/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sorrow321/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @muellerzr ",
"A PR for this would be great @Sorrow321! :) ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,698 | 1,701 | 1,701 |
CONTRIBUTOR
| null |
### Feature request
**persistent_workers** is a parameter that can be passed to `DataLoader`. It's set to **False** by default. What it does ([quote](https://discuss.pytorch.org/t/what-are-the-dis-advantages-of-persistent-workers/102110/2)):
> With this option set to false, every time your code hits a line like `for sample in dataloader:`, it will create a brand new set of workers to do this loading and will kill them on exit.
> Meaning that if you have multiple dataloaders, the workers will be killed when you are done with one instantly.
>
> If you make them persist, these workers will stay around (with their state) waiting for another call into that dataloader.
>
> Setting this to True will improve performances when you call into the dataloader multiple times in a row (as creating the workers is expensive). But it also means that the dataloader will have some persistent state even when it is not used (which can use some RAM depending on your dataset).
Currently there is no way to pass **persistent_workers=True** to the internal `DataLoader` without inheriting from the **Trainer** class. Here is the code snippet from **trainer.py**:
```python
def get_train_dataloader(self) -> DataLoader:
"""
Returns the training [`~torch.utils.data.DataLoader`].
Will use no sampler if `train_dataset` does not implement `__len__`, a random sampler (adapted to distributed
training if necessary) otherwise.
Subclass and override this method if you want to inject some custom behavior.
"""
# ...
# (some code)
# ...
return DataLoader(
train_dataset,
batch_size=self._train_batch_size,
sampler=train_sampler,
collate_fn=data_collator,
drop_last=self.args.dataloader_drop_last,
num_workers=self.args.dataloader_num_workers,
pin_memory=self.args.dataloader_pin_memory,
worker_init_fn=seed_worker,
)
```
I suggest adding this parameter to the **TrainingArguments** class, defaulting to **False** so existing pipelines are unaffected; a rough interim workaround is sketched below.
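A rough interim workaround, mirroring the snippet above rather than the full Trainer internals (which also handle `IterableDataset` and distributed samplers), could look like this:
```python
# Sketch of a subclass-based workaround; persistent workers only make sense with
# dataloader_num_workers > 0, otherwise PyTorch raises a ValueError.
from torch.utils.data import DataLoader
from transformers import Trainer
from transformers.trainer_utils import seed_worker

class PersistentWorkersTrainer(Trainer):
    def get_train_dataloader(self) -> DataLoader:
        return DataLoader(
            self.train_dataset,
            batch_size=self._train_batch_size,
            sampler=self._get_train_sampler(),
            collate_fn=self.data_collator,
            drop_last=self.args.dataloader_drop_last,
            num_workers=self.args.dataloader_num_workers,
            pin_memory=self.args.dataloader_pin_memory,
            worker_init_fn=seed_worker,
            persistent_workers=True,
        )
```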
### Motivation
Motivation is very simple: setting this parameter to **True** can speed up training process in some cases (but will require more RAM).
### Your contribution
I can make the PR
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27058/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27058/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/27057
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27057/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27057/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27057/events
|
https://github.com/huggingface/transformers/issues/27057
| 1,960,502,763 |
I_kwDOCUB6oc502uXr
| 27,057 |
Cross Entropy Loss Ignore Index in BARTForConditionalGeneration
|
{
"login": "dipta007",
"id": 13894030,
"node_id": "MDQ6VXNlcjEzODk0MDMw",
"avatar_url": "https://avatars.githubusercontent.com/u/13894030?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dipta007",
"html_url": "https://github.com/dipta007",
"followers_url": "https://api.github.com/users/dipta007/followers",
"following_url": "https://api.github.com/users/dipta007/following{/other_user}",
"gists_url": "https://api.github.com/users/dipta007/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dipta007/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dipta007/subscriptions",
"organizations_url": "https://api.github.com/users/dipta007/orgs",
"repos_url": "https://api.github.com/users/dipta007/repos",
"events_url": "https://api.github.com/users/dipta007/events{/privacy}",
"received_events_url": "https://api.github.com/users/dipta007/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hello, the default ignore index of the cross entropy loss is `-100` which is what we use. \r\nIf you follow the docstring, the labels are supposed to be given by the user:\r\n\r\n> labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size,)`, `optional`):\r\n> Labels for computing the sequence classification/regression loss. Indices should be in :obj:`[-100, 0, ...,\r\n> config.vocab_size - 1]`. All labels set to ``-100`` are ignored (masked), the loss is only computed for\r\n> labels in ``[0, ..., config.vocab_size]``\r\n\r\nThe huggingface [datacollator](https://github.com/huggingface/transformers/blob/main/src/transformers/data/data_collator.py#L556) usually does this. \r\n\r\nThis gives more freedom to the user. ",
"Thanks @ArthurZucker. So, I misunderstood. \r\nIf I understood it correctly, if I only use BARTTokenizer, then I have to first replace all the `pad_tokens` with `-100` before passing it to the model. Is that correct?\r\n",
"Hi @dipta007 , yes that can also do the trick, you can also use the `shift_tokens_right` method: https://github.com/huggingface/transformers/blob/main/src/transformers/models/bart/modeling_bart.py#L74 that will properly shift your tokens to the right and correctly compute the loss",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,698 | 1,701 | 1,701 |
NONE
| null |
### System Info
This bug doesn't depend on the environment.
### Who can help?
@ArthurZucker @younesbelkada
In the [code](https://github.com/huggingface/transformers/blob/6cbc1369a330860c128a1ba365f246751382c9e5/src/transformers/models/bart/modeling_bart.py#L1412), BARTForConditionalGeneration uses the default cross-entropy loss, whose `ignore_index` is -100. To my understanding, the ignore index should be the pad index, which is 1. I am not sure if I am missing something.
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

model_name = "facebook/bart-base"
model = BartForConditionalGeneration.from_pretrained(model_name)
tokenizer = BartTokenizer.from_pretrained(model_name)
input = [
"Long live America.",
"I love Huggingface as it has made the ML development so easy."
]
input_tok = tokenizer(input, return_tensors='pt', padding=True)
out = model(input_tok['input_ids'], labels=input_tok['input_ids'])
default_loss = out.loss
criterion = torch.nn.CrossEntropyLoss(ignore_index=model.config.pad_token_id)
should_be_loss = criterion(out.logits.view(-1, out.logits.size(-1)), input_tok['input_ids'].view(-1))
print(default_loss.item(), should_be_loss.item())
```
### Expected behavior
The loss should be `0.12098739296197891`
But the loss comes with the default loss output `4.7771992683410645`
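For reference, the resolution discussed in the replies boils down to masking padded positions in the labels with -100 before computing the loss; a minimal sketch reusing the variables from the reproduction above:
```python
# Mask padding in the labels so the default cross-entropy loss ignores it.
labels = input_tok["input_ids"].clone()
labels[labels == tokenizer.pad_token_id] = -100
out = model(input_tok["input_ids"], labels=labels)
print(out.loss.item())
```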
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27057/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27057/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/27056
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27056/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27056/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27056/events
|
https://github.com/huggingface/transformers/issues/27056
| 1,960,499,933 |
I_kwDOCUB6oc502trd
| 27,056 |
Output from LLAMA FlashAttention2 is different from that without FlashAttention2
|
{
"login": "jeannefukumaru",
"id": 26344607,
"node_id": "MDQ6VXNlcjI2MzQ0NjA3",
"avatar_url": "https://avatars.githubusercontent.com/u/26344607?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jeannefukumaru",
"html_url": "https://github.com/jeannefukumaru",
"followers_url": "https://api.github.com/users/jeannefukumaru/followers",
"following_url": "https://api.github.com/users/jeannefukumaru/following{/other_user}",
"gists_url": "https://api.github.com/users/jeannefukumaru/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jeannefukumaru/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jeannefukumaru/subscriptions",
"organizations_url": "https://api.github.com/users/jeannefukumaru/orgs",
"repos_url": "https://api.github.com/users/jeannefukumaru/repos",
"events_url": "https://api.github.com/users/jeannefukumaru/events{/privacy}",
"received_events_url": "https://api.github.com/users/jeannefukumaru/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Seems like a duplicate of #27056 and #26697",
"Thanks for the reply @ArthurZucker . Upgrading the flash attention version worked! ",
"Closing issue for now"
] | 1,698 | 1,698 | 1,698 |
NONE
| null |
### System Info
- `transformers` version: 4.31.0
- Platform: Linux-5.15.107+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.14.1
- Safetensors version: 0.3.3
- Accelerate version: 0.21.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): 2.13.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am using FlashAttention v2 and BetterTransformer, which just came out with transformers v4.34. At the moment, when I enable FlashAttention using the code snippet below, I see that:
- The output increases from ~1k tokens to ~3k tokens
- All of the output becomes repetitive gibberish. But without FlashAttention v2 and BetterTransformers I get reasonable answers.
Why would FlashAttention appear to cause this behaviour change? IIUC, FlashAttention mainly improves the handling of longer contexts and makes computation more efficient, which shouldn't affect model quality.
```python
import torch
from accelerate import Accelerator
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    # quantization_config=bnb_config,
    torch_dtype=torch.bfloat16,
    # load_in_8bit=True,
    device_map={"": Accelerator().process_index},
    trust_remote_code=True,
    use_flash_attention_2=True,
).to_bettertransformer()
```
```
''' Test prompt '''
# from https://github.com/facebookresearch/llama-recipes/blob/main/examples/chat_completion/chats.json
prompt = [[
{"role": "user", "content": "I am going to Paris, what should I see?"},
{
"role": "assistant",
"content": "Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city. 2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa. 3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.These are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world."
},
{"role": "user", "content": "What is so great about #1?"}
]]
```
```
import json
from typing import List, Literal, TypedDict
Role = Literal["user", "assistant"]
class Message(TypedDict):
role: Role
content: str
Dialog = List[Message]
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"
def format_tokens(dialogs, tokenizer):
prompt_tokens = []
for dialog in dialogs:
if dialog[0]["role"] == "system":
dialog = [
{
"role": dialog[1]["role"],
"content": B_SYS
+ dialog[0]["content"]
+ E_SYS
+ dialog[1]["content"],
}
] + dialog[2:]
assert all([msg["role"] == "user" for msg in dialog[::2]]) and all(
[msg["role"] == "assistant" for msg in dialog[1::2]]
), (
"model only supports 'system','user' and 'assistant' roles, "
"starting with user and alternating (u/a/u/a/u...)"
)
"""
Please verify that your tokenizer support adding "[INST]", "[/INST]" to your inputs.
Here, we are adding it manually.
"""
dialog_tokens: List[int] = sum(
[
tokenizer.encode(
f"{B_INST} {(prompt['content']).strip()} {E_INST} {(answer['content']).strip()} ",
) + [tokenizer.eos_token_id]
for prompt, answer in zip(dialog[::2], dialog[1::2])
],
[],
)
assert (
dialog[-1]["role"] == "user"
), f"Last message must be from user, got {dialog[-1]['role']}"
dialog_tokens += tokenizer.encode(
f"{B_INST} {(dialog[-1]['content']).strip()} {E_INST}",
)
prompt_tokens.append(dialog_tokens)
return prompt_tokens
```
```
''' Tokenize prompt '''
# from chat_utils import format_tokens
tokens = format_tokens(prompt, tokenizer)[0]
tokens = torch.tensor(tokens).long()
tokens = tokens.unsqueeze(0)
tokens = tokens.to("cuda:0")
```
```
num_responses = 32
# time the generate call for all responses
import time
import numpy as np
t0 = time.time()
output = model.generate(tokens.repeat([1,1]), max_new_tokens=256, num_return_sequences=num_responses, do_sample=True, top_k=5, top_p=0.9,pad_token_id=tokenizer.eos_token_id)
t1 = time.time()
total = t1-t0
print("generate total time:", np.round(total, 2), "secs")
print_result = True
if print_result:
for i in range(len(output)):
print("\n\nOutput:", i+1, "---------------------------------------------------------\n")
output_text = tokenizer.decode(output[i], skip_special_tokens=True, early_stopping=True)
print(f"length of output: {len(output_text)}")
if i == 0:
print(output_text.split('[/INST]')[0])
print('\n\n')
print(output_text.split('[/INST]')[-1])
```
### Expected behavior
Expected output:
```
Output: 1 ---------------------------------------------------------
length of output: 1932
[INST] I am going to Paris, what should I see?
The Eiffel Tower is considered one of the most iconic landmarks in Paris and one of the most recognizable symbols of France. Here are some reasons why it's so great:
1. Unique Design: The Eiffel Tower is an engineering marvel with its lattice-style design, which was revolutionary for its time. It was designed by Gustave Eiffel and his company for the 1889 World's Fair, held in Paris.
2. Breathtaking Views: The Eiffel Tower offers panoramic views of the city from its observation decks on the first and second levels. Visitors can see many of Paris's famous landmarks, such as the Arc de Triomphe, the Champs-Élysées, and the Seine River.
3. Historical Significance: The Eiffel Tower was a symbol of French engineering and innovation during the late 19th century. It was also a symbol of French culture and art, as it was the tallest structure in the world at the time of its construction.
4. Romantic Atmosphere: The Eiffel Tower is often associated with romance, thanks to its
```
Actual output:
```
Output: 1 ---------------------------------------------------------
length of output: 3584
[INST] I am going to Paris, what should I see?
nobody nobody Unterscheidung everybody everybody hopefully Unterscheidung hopefully everybody Unterscheidung Unterscheidung hopefully nobody nobody hopefully Hinweis hopefully hopefully nobody Unterscheidung nobody Unterscheidung Unterscheidung nobody everybody Unterscheidung nobody Unterscheidung everybody nobody hopefully nobody nobody nobody Unterscheidung everybody hopefully Unterscheidung hopefully nobody Unterscheidung hopefully hopefully Unterscheidung nobody everybody nobody Unterscheidung nobody nobody nobody Unterscheidung nobody nobody Unterscheidung everybody nobody nobody hopefully nobody nobody everybody hopefully everybody hopefully nobody everybody Unterscheidung Unterscheidung Unterscheidung Unterscheidung nobody hopefully Unterscheidung Unterscheidung Unterscheidung nobody everybody everybody Unterscheidung nobody nobody nobody nobody nobody nobody Unterscheidung Unterscheidung nobody nobody hopefully Unterscheidung Unterscheidung nobody nobody Unterscheidung everybody nobody Unterscheidung Unterscheidung nobody nobody Unterscheidung Unterscheidung everybody Unterscheidung nobody hopefully Unterscheidung nobody nobody everybody nobody hopefully nobody Unterscheidung hopefully Unterscheidung everybody nobody hopefully everybody Unterscheidung nobody nobody nobody everybody Unterscheidung nobody nobody nobody everybody nobody nobody nobody Unterscheidung nobody Unterscheidung Unterscheidung hopefully nobody Unterscheidung hopefully nobody hopefully nobody nobody Unterscheidung everybody hopefully Unterscheidung Unterscheidung Unterscheidung nobody nobody Unterscheidung everybody nobody everybody Unterscheidung hopefully everybody nobody Unterscheidung Unterscheidung nobody nobody
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27056/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27056/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/27055
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27055/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27055/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27055/events
|
https://github.com/huggingface/transformers/pull/27055
| 1,960,463,059 |
PR_kwDOCUB6oc5dtCJB
| 27,055 |
Fix typo in warning message
|
{
"login": "liuxueyang",
"id": 3584877,
"node_id": "MDQ6VXNlcjM1ODQ4Nzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3584877?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/liuxueyang",
"html_url": "https://github.com/liuxueyang",
"followers_url": "https://api.github.com/users/liuxueyang/followers",
"following_url": "https://api.github.com/users/liuxueyang/following{/other_user}",
"gists_url": "https://api.github.com/users/liuxueyang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/liuxueyang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liuxueyang/subscriptions",
"organizations_url": "https://api.github.com/users/liuxueyang/orgs",
"repos_url": "https://api.github.com/users/liuxueyang/repos",
"events_url": "https://api.github.com/users/liuxueyang/events{/privacy}",
"received_events_url": "https://api.github.com/users/liuxueyang/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"It's not a typo. I guess this is a missing change in the PR #18492 ",
"Already rebased and CI passed.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Sorry for the delay I'll merge 😉 ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27055). All of your documentation changes will be reflected on that endpoint."
] | 1,698 | 1,700 | 1,700 |
CONTRIBUTOR
| null |
Fix a typo in the warning message. The value of `default_cache_path` is `~/.cache/huggingface/hub` not `~/.cache/huggingface/transformers`.
v4.22.0 is the earliest version that contains the changes from PR https://github.com/huggingface/transformers/pull/18492
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27055/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27055/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27055",
"html_url": "https://github.com/huggingface/transformers/pull/27055",
"diff_url": "https://github.com/huggingface/transformers/pull/27055.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27055.patch",
"merged_at": 1700825045000
}
|
https://api.github.com/repos/huggingface/transformers/issues/27054
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27054/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27054/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27054/events
|
https://github.com/huggingface/transformers/issues/27054
| 1,960,448,576 |
I_kwDOCUB6oc502hJA
| 27,054 |
/lib64/libcrypto.so.10: version `OPENSSL_1.0.2' not found when install transformers using conda
|
{
"login": "liumilan",
"id": 5533901,
"node_id": "MDQ6VXNlcjU1MzM5MDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/5533901?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/liumilan",
"html_url": "https://github.com/liumilan",
"followers_url": "https://api.github.com/users/liumilan/followers",
"following_url": "https://api.github.com/users/liumilan/following{/other_user}",
"gists_url": "https://api.github.com/users/liumilan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/liumilan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liumilan/subscriptions",
"organizations_url": "https://api.github.com/users/liumilan/orgs",
"repos_url": "https://api.github.com/users/liumilan/repos",
"events_url": "https://api.github.com/users/liumilan/events{/privacy}",
"received_events_url": "https://api.github.com/users/liumilan/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[] | 1,698 | 1,698 | 1,698 |
NONE
| null |
conda create -n transformer python=3.8.0
conda activate transformer
conda install -c huggingface transformers
conda install PyTorch=1.10
conda install -c anaconda openssl
When I run `from transformers import pipeline`, it reports:
/lib64/libcrypto.so.10: version `OPENSSL_1.0.2' not found (required by /home/work/anaconda3/envs/transformer/lib/python3.8/site-packages/tokenizers/tokenizers.cpython-38-x86_64-linux-gnu.so)
How can I fix this?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27054/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27054/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/27053
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27053/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27053/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27053/events
|
https://github.com/huggingface/transformers/issues/27053
| 1,960,339,689 |
I_kwDOCUB6oc502Gjp
| 27,053 |
Number of tokens mismatch for Codellama-34b-hf
|
{
"login": "irenedea",
"id": 14367635,
"node_id": "MDQ6VXNlcjE0MzY3NjM1",
"avatar_url": "https://avatars.githubusercontent.com/u/14367635?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/irenedea",
"html_url": "https://github.com/irenedea",
"followers_url": "https://api.github.com/users/irenedea/followers",
"following_url": "https://api.github.com/users/irenedea/following{/other_user}",
"gists_url": "https://api.github.com/users/irenedea/gists{/gist_id}",
"starred_url": "https://api.github.com/users/irenedea/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/irenedea/subscriptions",
"organizations_url": "https://api.github.com/users/irenedea/orgs",
"repos_url": "https://api.github.com/users/irenedea/repos",
"events_url": "https://api.github.com/users/irenedea/events{/privacy}",
"received_events_url": "https://api.github.com/users/irenedea/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hey @ArthurZucker , would you be able to help with this? I don't think https://github.com/huggingface/transformers/issues/26156 was fully resolved. Thank you!",
"Hey, yes for sure! ",
"Hey @ArthurZucker any ETA on when this might be resolved?",
"@dakinggg Arthur is off for a week. If someone in the community wants to open a PR to resolve this before then feel free to ping me for a review. ",
"FYI for anyone else who ends up here, I've worked around this by initializing the tokenizer like so:\r\n`t = transformers.AutoTokenizer.from_pretrained('codellama/CodeLlama-7b-hf, prefix_token=None, middle_token=None, suffix_token=None, eot_token=None, fill_token=None)`",
"Yep that's the fix, I'm gonna push it to the hub today! 😉 Just gotta make sure this errors out properly on the FMI ",
"[PR](https://huggingface.co/codellama/CodeLlama-34b-hf/discussions/23), you can use the reference head for now, I just have to test Forward Compatibility and Backward compatibility 😉 \r\nEDIT: it's not a backward compatible change, because before 4.34.0, having non fill tokens would trigger this:\r\n```python \r\n----> 1 tokenizer.tokenize(\"Hey</s>sir\")\r\n\r\nFile ~/miniforge3/envs/4.29/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py:344, in PreTrainedTokenizerFast.tokenize(self, text, pair, add_special_tokens, **kwargs)\r\n 343 def tokenize(self, text: str, pair: Optional[str] = None, add_special_tokens: bool = False, **kwargs) -> List[str]:\r\n--> 344 return self.encode_plus(text=text, text_pair=pair, add_special_tokens=add_special_tokens, **kwargs).tokens()\r\n\r\nFile ~/miniforge3/envs/4.29/lib/python3.9/site-packages/transformers/models/code_llama/tokenization_code_llama_fast.py:298, in CodeLlamaTokenizerFast.encode_plus(self, text, text_pair, suffix_first, add_special_tokens, **kwargs)\r\n 295 def encode_plus(self, text, text_pair=None, suffix_first=False, add_special_tokens=True, **kwargs):\r\n 296 # hack to make sure the input is pre-process but outside rust\r\n 297 text_pair = kwargs.pop(\"suffix\", text_pair)\r\n--> 298 if self.fill_token in text and text_pair is None:\r\n 299 text, text_pair = text.split(self.fill_token)\r\n 301 if text_pair is None or len(text_pair) < 1:\r\n```\r\nI'd rather we keep the \"wrong\" tokenizer length than break for other. Hope it makes sense for you and the fix is exactly what @dakinggg proposed.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,698 | 1,702 | 1,702 |
NONE
| null |
### System Info
transformers 4.34.1
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoTokenizer, PretrainedConfig, CodeLlamaTokenizerFast
model = 'codellama/CodeLlama-34b-hf'
tokenizer = AutoTokenizer.from_pretrained(model)
config = PretrainedConfig.from_pretrained(model)
assert len(tokenizer) <= config.vocab_size, f'Got tokenizer size {len(tokenizer)} and vocab size {config.vocab_size}'
```
### Expected behavior
len(tokenizer) is expected to be less than or equal to config.vocab_size, but we get: `AssertionError: Got tokenizer size 32004 and vocab size 32000`.
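Until this is resolved, a sketch of the workaround discussed in the thread (skipping the extra infilling special tokens so the tokenizer length matches the config):
```python
# Workaround sketch: drop the FIM special tokens when loading the tokenizer.
tokenizer = AutoTokenizer.from_pretrained(
    "codellama/CodeLlama-34b-hf",
    prefix_token=None,
    middle_token=None,
    suffix_token=None,
    eot_token=None,
    fill_token=None,
)
assert len(tokenizer) <= config.vocab_size
```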
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27053/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27053/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/27052
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27052/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27052/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27052/events
|
https://github.com/huggingface/transformers/pull/27052
| 1,960,315,590 |
PR_kwDOCUB6oc5dsjty
| 27,052 |
Persimmon fa2 attention4d
|
{
"login": "jeromeku",
"id": 2455711,
"node_id": "MDQ6VXNlcjI0NTU3MTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/2455711?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jeromeku",
"html_url": "https://github.com/jeromeku",
"followers_url": "https://api.github.com/users/jeromeku/followers",
"following_url": "https://api.github.com/users/jeromeku/following{/other_user}",
"gists_url": "https://api.github.com/users/jeromeku/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jeromeku/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jeromeku/subscriptions",
"organizations_url": "https://api.github.com/users/jeromeku/orgs",
"repos_url": "https://api.github.com/users/jeromeku/repos",
"events_url": "https://api.github.com/users/jeromeku/events{/privacy}",
"received_events_url": "https://api.github.com/users/jeromeku/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@younesbelkada \r\n\r\nlmk if my implementation of 4d attention mask (#26792) + FA2 needs tweaking.\r\n\r\nRegarding previous [comment](https://github.com/huggingface/transformers/issues/26350?utm_source=tldrai#issuecomment-1745613947), I'd like to understand HF's current strategy for integrating third-party / OSS libraries and components. Given the rapid pace of innovation in this space, want to ensure that `transformers` and its sister libraries remain best-in-class wrt usability and performance! ",
"Thanks very much for your great contrib @jeromeku ! Sorry for the delay responding on the PR, I will have an extensive look at the PR and your questions by beginning of next week (from 30rd october) 🙏 ",
"Hi! great work and I don't mean to butt in here, but in case it helps take this home:\r\n\r\nI was trying to get this to work and ran into some issues with the latest (4.36.dev0) version of transformers after cloning this pr and rebasing on main. I had to do this because of the llama2 tokenizer/optimum import issue that I get using the transformers version **as is** verbatim on this pr.\r\n\r\nAfter scouring gh, I came across a fine-tuning repo for Fuyu, and the same author has a working version of FA2 for persimmon (I was able to train persimmon on it with FA2): \r\n- https://github.com/phillip-kravtsov/transformers/tree/floating-updates\r\n\r\n```sh\r\npip install git+https://github.com/phillip-kravtsov/transformers.git@floating-updates\r\n```\r\n\r\nThis is experimental and the work of one dude, so for the record the working SHA is: `b8000bd8d619cbbedcb806b67faa68c2300b4bd0`\r\n\r\n\r\nhope this helps!",
"@younesbelkada \r\n\r\nLet me know how I can improve the PR. Also, would appreciate thoughts on previous [query](https://github.com/huggingface/transformers/issues/26350#issuecomment-1745613947) when you get a chance.\r\n\r\nThanks!",
"@younesbelkada \r\n\r\nTrying to get to the bottom of the issue:\r\n- Created a fresh `venv` and cloned the `transformers` repo as of `11/20/2023` and did a `pip install -e .[dev]`\r\n- Installed `flash_attn`:\r\n```\r\nName: flash-attn\r\nVersion: 2.3.4\r\nSummary: Flash Attention: Fast and Memory-Efficient Exact Attention\r\nHome-page: https://github.com/Dao-AILab/flash-attention\r\nAuthor: Tri Dao\r\nAuthor-email: trid@cs.stanford.edu\r\nLicense: \r\nLocation: /notebooks/virtualenvs/transformers-test/lib/python3.9/site-packages\r\nRequires: einops, ninja, packaging, torch\r\nRequired-by: \r\nMetadata-Version: 2.1\r\nInstaller: pip\r\nClassifiers:\r\n Programming Language :: Python :: 3\r\n License :: OSI Approved :: BSD License\r\n Operating System :: Unix\r\nEntry-points:\r\nProject-URLs:\r\n```\r\n- Ran `attn_2` tests for the following models (only showing failures):\r\n\r\n`Whisper`\r\n```\r\nFAILED tests/models/whisper/test_modeling_whisper.py::WhisperModelTest::test_flash_attn_2_inference - AssertionError: assert False\r\nFAILED tests/models/whisper/test_modeling_whisper.py::WhisperStandaloneDecoderModelTest::test_flash_attn_2_generate_padding_right - AssertionError: False is not true\r\nFAILED tests/models/whisper/test_modeling_whisper.py::WhisperStandaloneDecoderModelTest::test_flash_attn_2_inference - AssertionError: assert False\r\nFAILED tests/models/whisper/test_modeling_whisper.py::WhisperStandaloneDecoderModelTest::test_flash_attn_2_inference_padding_right - AssertionError: assert False\r\n==============\r\n```\r\n `Mistral`\r\n```\r\nFAILED tests/models/mistral/test_modeling_mistral.py::MistralModelTest::test_flash_attn_2_generate_padding_right - AssertionError: ValueError not raised\r\nFAILED tests/models/mistral/test_modeling_mistral.py::MistralModelTest::test_flash_attn_2_inference_padding_right - AssertionError: ValueError not raised\r\n```\r\n`Bark`\r\n```\r\nFAILED tests/models/bark/test_modeling_bark.py::BarkSemanticModelTest::test_flash_attn_2_fp32_ln - RuntimeError: FlashAttention only support fp16 and bf16 data type\r\nFAILED tests/models/bark/test_modeling_bark.py::BarkSemanticModelTest::test_flash_attn_2_from_config - ValueError: Unrecognized configuration class <class 'transformers.models.bark.configuration_bark.BarkSemanticConfig'> for this kind of AutoModel: ...\r\nFAILED tests/models/bark/test_modeling_bark.py::BarkCoarseModelTest::test_flash_attn_2_fp32_ln - RuntimeError: FlashAttention only support fp16 and bf16 data type\r\nFAILED tests/models/bark/test_modeling_bark.py::BarkCoarseModelTest::test_flash_attn_2_from_config - ValueError: Unrecognized configuration class <class 'transformers.models.bark.configuration_bark.BarkCoarseConfig'> for this kind of AutoModel: Au...\r\n```\r\n`GPTNeo`\r\n```\r\nFAILED tests/models/gpt_neo/test_modeling_gpt_neo.py::GPTNeoModelTest::test_flash_attn_2_generate_padding_right - AssertionError: False is not true\r\n```\r\n`Llama`, `Distillbert`, `GPT BigCode` tests all pass. \r\n`Falcon` I get `cuda device-side error`, which might be due to the fact that I'm running on an `A6000` (48Gb memory) which might not be sufficient. 
\r\n\r\nThoughts?\r\n\r\nFWIW, here's the output of `pip freeze`:\r\n```\r\nabsl-py==2.0.0\r\naccelerate==0.24.1\r\naiohttp==3.9.0\r\naiosignal==1.3.1\r\nalembic==1.12.1\r\nansi2html==1.8.0\r\nAPScheduler==3.10.4\r\narrow==1.3.0\r\nastunparse==1.6.3\r\nasync-timeout==4.0.3\r\nattrs==23.1.0\r\naudioread==3.0.1\r\nav==9.2.0\r\nBabel==2.13.1\r\nbackoff==1.11.1\r\nbeautifulsoup4==4.12.2\r\nbinaryornot==0.4.4\r\nbitsandbytes==0.41.2.post2\r\nblinker==1.7.0\r\ncachetools==5.3.2\r\ncertifi==2023.11.17\r\ncffi==1.16.0\r\nchardet==5.2.0\r\ncharset-normalizer==3.3.2\r\nchex==0.1.82\r\nclick==8.1.7\r\nclldutils==3.20.0\r\ncodecarbon==1.2.0\r\ncolorama==0.4.6\r\ncolorlog==6.7.0\r\ncookiecutter==1.7.3\r\ncsvw==3.2.1\r\ndash==2.14.1\r\ndash-bootstrap-components==1.5.0\r\ndash-core-components==2.0.0\r\ndash-html-components==2.0.0\r\ndash-table==5.0.0\r\ndatasets==2.15.0\r\ndecorator==5.1.1\r\ndecord==0.6.0\r\ndill==0.3.4\r\ndlinfo==1.2.1\r\ndm-tree==0.1.8\r\neinops==0.7.0\r\netils==1.5.2\r\nevaluate==0.4.1\r\nexceptiongroup==1.1.3\r\nexecnet==2.0.2\r\nfaiss-cpu==1.7.4\r\nfastjsonschema==2.19.0\r\nfilelock==3.13.1\r\nfire==0.5.0\r\nflash-attn==2.3.4\r\nFlask==3.0.0\r\nflatbuffers==23.5.26\r\nflax==0.7.0\r\nfrozenlist==1.4.0\r\nfsspec==2023.10.0\r\nfugashi==1.3.0\r\ngast==0.5.4\r\ngitdb==4.0.11\r\nGitPython==3.1.18\r\ngoogle-auth==2.23.4\r\ngoogle-auth-oauthlib==1.1.0\r\ngoogle-pasta==0.2.0\r\ngql==3.4.1\r\ngraphql-core==3.2.3\r\ngreenlet==3.0.1\r\ngrpcio==1.59.3\r\nh5py==3.10.0\r\nhf-doc-builder==0.4.0\r\nhuggingface-hub==0.19.4\r\nhypothesis==6.90.0\r\nidna==3.4\r\nimportlib-metadata==6.8.0\r\nimportlib-resources==6.1.1\r\niniconfig==2.0.0\r\nipadic==1.0.0\r\nisodate==0.6.1\r\nisort==5.12.0\r\nitsdangerous==2.1.2\r\njax==0.4.13\r\njaxlib==0.4.13\r\nJinja2==3.1.2\r\njinja2-time==0.2.0\r\njoblib==1.3.2\r\njsonschema==4.20.0\r\njsonschema-specifications==2023.11.1\r\njupyter_core==5.5.0\r\nkenlm==0.2.0\r\nkeras==2.15.0\r\nkeras-core==0.1.7\r\nkeras-nlp==0.6.3\r\nlanguage-tags==1.2.0\r\nlazy_loader==0.3\r\nlibclang==16.0.6\r\nlibrosa==0.10.1\r\nllvmlite==0.41.1\r\nlxml==4.9.3\r\nMako==1.3.0\r\nMarkdown==3.5.1\r\nmarkdown-it-py==3.0.0\r\nMarkupSafe==2.1.3\r\nmdurl==0.1.2\r\nml-dtypes==0.2.0\r\nmpmath==1.3.0\r\nmsgpack==1.0.7\r\nmultidict==6.0.4\r\nmultiprocess==0.70.12.2\r\nnamex==0.0.7\r\nnbformat==5.9.2\r\nnest-asyncio==1.5.8\r\nnetworkx==3.2.1\r\nninja==1.11.1.1\r\nnltk==3.8.1\r\nnumba==0.58.1\r\nnumpy==1.26.2\r\nnvidia-cublas-cu12==12.1.3.1\r\nnvidia-cuda-cupti-cu12==12.1.105\r\nnvidia-cuda-nvrtc-cu12==12.1.105\r\nnvidia-cuda-runtime-cu12==12.1.105\r\nnvidia-cudnn-cu12==8.9.2.26\r\nnvidia-cufft-cu12==11.0.2.54\r\nnvidia-curand-cu12==10.3.2.106\r\nnvidia-cusolver-cu12==11.4.5.107\r\nnvidia-cusparse-cu12==12.1.0.106\r\nnvidia-nccl-cu12==2.18.1\r\nnvidia-nvjitlink-cu12==12.3.101\r\nnvidia-nvtx-cu12==12.1.105\r\noauthlib==3.2.2\r\nonnx==1.15.0\r\nonnxconverter-common==1.13.0\r\nopt-einsum==3.3.0\r\noptax==0.1.4\r\noptuna==3.4.0\r\norbax-checkpoint==0.4.3\r\npackaging==23.2\r\npandas==2.1.3\r\nparameterized==0.9.0\r\nphonemizer==3.2.1\r\nPillow==9.5.0\r\nplac==1.4.1\r\nplatformdirs==4.0.0\r\nplotly==5.18.0\r\npluggy==1.3.0\r\npooch==1.8.0\r\nportalocker==2.0.0\r\npoyo==0.5.0\r\nprotobuf==3.20.3\r\npsutil==5.9.6\r\npy-cpuinfo==9.0.0\r\npyarrow==14.0.1\r\npyarrow-hotfix==0.5\r\npyasn1==0.5.1\r\npyasn1-modules==0.3.0\r\npycparser==2.21\r\npyctcdecode==0.5.0\r\npydantic==1.10.13\r\nPygments==2.17.1\r\npygtrie==2.5.0\r\npylatexenc==2.10\r\npynvml==11.5.0\r\npyparsing==3.1.1\r\npypng==0.20220715.0\r\npytest==7.4.3\r\
npytest-timeout==2.2.0\r\npytest-xdist==3.4.0\r\npython-dateutil==2.8.2\r\npython-slugify==8.0.1\r\npytz==2023.3.post1\r\nPyYAML==6.0.1\r\nray==2.8.0\r\nrdflib==7.0.0\r\nreferencing==0.31.0\r\nregex==2023.10.3\r\nrequests==2.31.0\r\nrequests-oauthlib==1.3.1\r\nrequests-toolbelt==0.10.1\r\nresponses==0.18.0\r\nretrying==1.3.4\r\nrfc3986==1.5.0\r\nrhoknp==1.3.0\r\nrich==13.7.0\r\nrjieba==0.1.11\r\nrouge-score==0.1.2\r\nrpds-py==0.13.1\r\nrsa==4.9\r\nruff==0.1.6\r\nsacrebleu==1.5.1\r\nsacremoses==0.1.1\r\nsafetensors==0.4.0\r\nscikit-learn==1.3.2\r\nscipy==1.11.4\r\nsegments==2.2.1\r\nsentencepiece==0.1.99\r\nsigopt==8.8.2\r\nsix==1.16.0\r\nsmmap==5.0.1\r\nsortedcontainers==2.4.0\r\nsoundfile==0.12.1\r\nsoupsieve==2.5\r\nsoxr==0.3.7\r\nSQLAlchemy==2.0.23\r\nSudachiDict-core==20230927\r\nSudachiPy==0.6.7\r\nsympy==1.12\r\ntabulate==0.9.0\r\ntenacity==8.2.3\r\ntensorboard==2.15.1\r\ntensorboard-data-server==0.7.2\r\ntensorboardX==2.6.2.2\r\ntensorflow==2.15.0\r\ntensorflow-estimator==2.15.0\r\ntensorflow-hub==0.15.0\r\ntensorflow-io-gcs-filesystem==0.34.0\r\ntensorflow-text==2.15.0\r\ntensorstore==0.1.45\r\ntermcolor==2.3.0\r\ntext-unidecode==1.3\r\ntf2onnx==1.15.1\r\nthreadpoolctl==3.2.0\r\ntimeout-decorator==0.5.0\r\ntimm==0.9.11\r\ntokenizers==0.15.0\r\ntomli==2.0.1\r\ntoolz==0.12.0\r\ntorch==2.1.1\r\ntorchaudio==2.1.1\r\ntorchvision==0.16.1\r\ntqdm==4.66.1\r\ntraitlets==5.13.0\r\n-e git+https://github.com/huggingface/transformers@38e2633f80a4924bf613b0240622492beee4cfcc#egg=transformers\r\ntriton==2.1.0\r\ntypes-python-dateutil==2.8.19.14\r\ntyping_extensions==4.8.0\r\ntzdata==2023.3\r\ntzlocal==5.2\r\nunidic==1.1.0\r\nunidic-lite==1.0.8\r\nuritemplate==4.1.1\r\nurllib3==1.26.18\r\nwasabi==0.10.1\r\nWerkzeug==3.0.1\r\nwrapt==1.14.1\r\nxxhash==3.4.1\r\nyarl==1.9.3\r\nzipp==3.17.0\r\n```",
"Hi @jeromeku, the `test_flash_attn_2_generate_padding_right` for `GptNeo` is quite flaky, most of the time it passes but sometimes it fails. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,698 | 1,703 | 1,703 |
NONE
| null |
# What does this PR do?
Adds Flash Attention 2 for Persimmon per #26350
Adds 2d->4d attention mask per #26792
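For context, a minimal sketch of what the 2d->4d expansion produces (assuming the `_prepare_4d_causal_attention_mask` helper from `transformers.modeling_attn_mask_utils`; the exact helper wired up in this PR may differ):
```python
# Minimal sketch, not this PR's diff: expand a 2d padding mask into the 4d
# additive causal mask that the attention layers consume.
import torch
from transformers.modeling_attn_mask_utils import _prepare_4d_causal_attention_mask

attention_mask = torch.tensor([[0, 1, 1]])  # 2d padding mask (left padding)
inputs_embeds = torch.zeros(1, 3, 8)        # dummy (batch, seq_len, hidden) tensor, only dtype/device matter
mask_4d = _prepare_4d_causal_attention_mask(attention_mask, (1, 3), inputs_embeds, 0)
print(mask_4d.shape)  # torch.Size([1, 1, 3, 3]): zeros on attended positions, large negatives on masked ones
```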
## Who can review?
@younesbelkada
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27052/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27052/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27052",
"html_url": "https://github.com/huggingface/transformers/pull/27052",
"diff_url": "https://github.com/huggingface/transformers/pull/27052.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27052.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/27051
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27051/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27051/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27051/events
|
https://github.com/huggingface/transformers/pull/27051
| 1,960,248,672 |
PR_kwDOCUB6oc5dsWcq
| 27,051 |
Bugfix / ffmpeg input device (mic) not working on Windows
|
{
"login": "Teapack1",
"id": 104318527,
"node_id": "U_kgDOBjfGPw",
"avatar_url": "https://avatars.githubusercontent.com/u/104318527?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Teapack1",
"html_url": "https://github.com/Teapack1",
"followers_url": "https://api.github.com/users/Teapack1/followers",
"following_url": "https://api.github.com/users/Teapack1/following{/other_user}",
"gists_url": "https://api.github.com/users/Teapack1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Teapack1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Teapack1/subscriptions",
"organizations_url": "https://api.github.com/users/Teapack1/orgs",
"repos_url": "https://api.github.com/users/Teapack1/repos",
"events_url": "https://api.github.com/users/Teapack1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Teapack1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hello , I wanted to kindly check in on the status of this PR review. Any feedback or alternative suggestions of the microphone issue are greatly appreciated. Thank you.\n@sanchit-gandhi\n@Narsil",
"@sanchit-gandhi Im using this patch on Windows 10 desktop with external mic and Windows 10 laptop with integrated mic, active mic is always found. Only issue I found is when I connect wireless headset with a foreign name (Sluchátka s mikrofonem) - It can not read the special symbols. It might be fixed with the correct encoding though. Here is code I tested with:\r\n\r\n```\r\nimport sys\r\nfrom transformers import pipeline\r\nfrom transformers.pipelines.audio_utils import ffmpeg_microphone_live\r\nimport torch\r\n\r\nmodel_id = \"distil-whisper/distil-medium.en\"\r\n\r\ndevice = \"cuda:0\" if torch.cuda.is_available() else \"cpu\"\r\ntorch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32\r\n\r\npipe = pipeline(\"automatic-speech-recognition\", model=model_id, device=device)\r\nsampling_rate = pipe.feature_extractor.sampling_rate\r\n\r\nchunk_length_s = 2 # how often returns the text\r\nstream_chunk_s = 1 # how often the microphone is checked for new audio\r\nmic = ffmpeg_microphone_live(\r\n sampling_rate=sampling_rate,\r\n chunk_length_s=chunk_length_s,\r\n stream_chunk_s=stream_chunk_s,\r\n)\r\nprint(\"Start talking...\")\r\nfor item in pipe(mic):\r\n sys.stdout.write(\"\\033[K\")\r\n print(item[\"text\"], end=\"\\r\")\r\n if not item[\"partial\"][0]:\r\n print(\"\")\r\n```\r\n",
"Which version of `ffmpeg` are you using ? The command to detect the input mic is non existent on Linux.\r\n\r\nIt's unlikely to be Windows vs Linux issue, but probably a lot more a ffmpeg version. I'm using ffmpeg 6.1 (for the absence of `-list_devices`).",
"Im using build 2023-11-27, that should be also v6.1.\r\n\r\nBut I'm thinking the issue is, that according to this [documentation](https://ffmpeg.org/ffmpeg-devices.html#alsa), the ffmpeg list command is different on Linux. \r\n> To see the list of cards currently recognized by your system check the files /proc/asound/cards and /proc/asound/devices.\r\n\r\nLike this: `cat /proc/asound/cards`\r\n\r\nThe `-list_devices` should work on Windows and Mac. On Mac the command would be: `ffmpeg -f avfoundation -list_devices true -i `.\r\n\r\nI have a Linux Ubuntu 22.* i can inplemement this in future commits, but I do not have Mac.",
"On Linux, this patch is not needed in my experience, only Windows system.",
"Hello all, @Narsil @sanchit-gandhi \r\nAre we going to merge this patch with Transformers? If you thing there is something to add, feel free to point out. \r\n\r\nI've been using this patch for audio live-inference, like ASR with Whisper. Without it can not use mic on Windows.\r\n\r\nI just changed encoding for this fix to accept special symbols, so headsets named with special symbols are read correctly and assigned.\r\n\r\nthank you\r\nKind Regards\r\nOndrej Major",
"@Teapack1 Thanks for looking into this issue, just ran into this problem today when completing the voice assistant project of the HF audio course. I cloned and tried your latest branch of transformers with the ffmpeg updates. For some reason it does not work, however, when I hardcode in the name of the mic as you suggested it works:\r\n\r\ninput_ = \"audio=Microphone Array (Intel® Smart Sound Technology (Intel® SST))\"\r\n\r\nDo you account for the special symbol \"®\"?",
"@mmcgovern574 Thank you for the feedback !\r\n\r\nCan you please expand on the issue you had with the patch? \r\n\r\n1) Did you install the Transformers branch [`'bugfix/ffmpeg_device_windows'`](https://github.com/Teapack1/transformers/tree/bugfix/ffmpeg_device_windows) ? The patch is not implemented in the `'main'` branch of my fork.\r\n\r\n2) Did it return any error messages/logs? Or any other hint?\r\n\r\n3) Did the issue manifested by code passing once without any effect, same as with the original ? \r\n\r\nIf only 3) is correct, it might be the special symbols. I will have to use \"alternative device names\", which are ffmpeg device codes and use it for the `input_` variable. Instead of this `\"Desktop Microphone (RØDE NT-USB+)\"` be using this `\"@device_cm_{33D9A762-90C8-11D0-BD43-00A0C911CE86}\\wave_{E88C863A-F490-45C4-A1FB-4817532F6DE0}\"`, where the special symbols should not be issue anymore.\r\n\r\n\r\n\r\n",
"@Teapack1\r\n1. I pip installed installed Teapack1/transformers in my conda environment. \r\n2. No errors or message logs\r\n3. Yes, same problem as the original code",
"@mmcgovern574 \r\nIt was 99% problem by my repo not being setup corrrectly. I set the correct branch as default, so now it installs the correct branch by default.\r\n"
] | 1,698 | 1,704 | 1,704 |
CONTRIBUTOR
| null |
# What does this PR do?
According to issue #25183 and personal experience, the `ffmpeg_microphone()` (`ffmpeg_microphone_live()`) function of the `audio_utils.py` pipeline, responsible for streaming raw microphone data, does not run properly on Windows machines.
It does not crash; the issue manifests as the code passing once without any effect, instead of running and processing audio data continuously.
This PR edits `pipelines/audio_utils.py`. It adds a function `_get_microphone_name()` that fetches the active microphone device name and passes it to `ffmpeg_microphone()`, the function that streams audio data from the microphone.
There are no additional dependencies used.
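For illustration, a minimal sketch of the kind of lookup `_get_microphone_name()` performs on Windows (the parsing details and fallback below are assumptions, not the exact patch):
```python
# Hypothetical sketch: list DirectShow devices via ffmpeg and pick the first
# audio input; the actual patch may parse the output differently.
import platform
import re
import subprocess


def _get_microphone_name() -> str:
    if platform.system() != "Windows":
        return "default"  # the fix targets Windows; other platforms keep ffmpeg's defaults

    command = ["ffmpeg", "-list_devices", "true", "-f", "dshow", "-i", "dummy"]
    # ffmpeg prints the device list on stderr and exits non-zero, so ignore the return code.
    result = subprocess.run(command, capture_output=True, text=True, encoding="utf-8", errors="replace")
    for line in result.stderr.splitlines():
        match = re.search(r'"([^"]+)"\s+\(audio\)', line)
        if match:
            return f"audio={match.group(1)}"  # format expected by ffmpeg's dshow input, e.g. "audio=Microphone Array (...)"
    return "default"
```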
System info:
Desktop Windows 10 Pro N 22H2
VS Code 1.83.1
Peripheral microphone
Please review the PR and let me know if changes are needed in the code or in the approach to fixing the issue.
@sanchit-gandhi
@Narsil
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27051/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27051/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27051",
"html_url": "https://github.com/huggingface/transformers/pull/27051",
"diff_url": "https://github.com/huggingface/transformers/pull/27051.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27051.patch",
"merged_at": 1704717157000
}
|
https://api.github.com/repos/huggingface/transformers/issues/27050
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27050/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27050/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27050/events
|
https://github.com/huggingface/transformers/issues/27050
| 1,960,195,010 |
I_kwDOCUB6oc501jPC
| 27,050 |
Difference in LlamaAttention & LlamaFlashAttention2 attn_output
|
{
"login": "ringohoffman",
"id": 27844407,
"node_id": "MDQ6VXNlcjI3ODQ0NDA3",
"avatar_url": "https://avatars.githubusercontent.com/u/27844407?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ringohoffman",
"html_url": "https://github.com/ringohoffman",
"followers_url": "https://api.github.com/users/ringohoffman/followers",
"following_url": "https://api.github.com/users/ringohoffman/following{/other_user}",
"gists_url": "https://api.github.com/users/ringohoffman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ringohoffman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ringohoffman/subscriptions",
"organizations_url": "https://api.github.com/users/ringohoffman/orgs",
"repos_url": "https://api.github.com/users/ringohoffman/repos",
"events_url": "https://api.github.com/users/ringohoffman/events{/privacy}",
"received_events_url": "https://api.github.com/users/ringohoffman/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] |
open
| false |
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hey, I think this is related to flash attention version, could you have a look at #26697? ",
"We are currently using `flash-attn==2.3.2`. There was a minor version release of flash attention literally yesterday.\r\n\r\nThe problem persists with `flash-attn==2.3.3`.\r\n\r\nAre you able to reproduce on your end with the supplied script?",
"cc @younesbelkada if you can have a look 😉 ",
"hi @KyleMylonakisProtopia !\r\nI think that difference is expected, I am not sure if flash-attn guarantees full reproducibility for gradient computation, note also that some slight differences in logits are expected between FA-2 and non FA-2 models. ",
"The code demonstrates non-trivial differences in the loss prior to even the first backwards call. Flash attention and flash attention 2 are supposed to be exact algorithms for computing attention. \r\n\r\nFrom the Flash attention 2 paper \"To speed up attention on hardware accelerators such as GPU, [5] proposes an algorithm to reduce the memory\r\nreads/writes while maintaining the same output (without approximation).\" That seems pretty unambiguous to me. \r\n\r\nThe slight differences from whatever parallelization differences are happening should not be manifesting at the third significant digit on the first loss call. This points to some other kind of issue.",
"> Flash attention and flash attention 2 are supposed to be exact algorithms for computing attention.\r\n\r\nyes, but in the script above you are comparing vanilla attention vs FA-2 no?",
"That sentence is referring to Flash attention (and implicitly flash attention 2) to \"vanilla\" attention. That is what our script is showing.",
"ah correct yes you are right, sorry for the confusion, I'll have a deeper look !",
"I also encountered the same problem at inference. Environment: `transformers==4.34.0`, `flash-attn==2.3.3`, `torch==2.0.1+cu117`.\r\n\r\n```python\r\nseed = 42\r\nnp.random.seed(seed)\r\ntorch.manual_seed(seed)\r\ntorch.cuda.manual_seed_all(seed)\r\nprompt = \"\"\"<s>[INST]Tell me the story about a dog.[/INST]\"\"\"\r\nd_model = \"/path/to/CodeLlama-13b-Instruct-hf\"\r\ntokenizer = CodeLlamaTokenizer.from_pretrained(d_model)\r\nmodel = LlamaForCausalLM.from_pretrained(d_model, device_map=\"auto\", torch_dtype=torch.bfloat16)\r\ntokenized = tokenizer(prompt, return_tensors=\"pt\", truncation=False).to(\"cuda\")\r\ngenerated_ids = model.generate(**tokenized, max_new_tokens=1024, do_sample=True, streamer=TextStreamer(tokenizer, skip_prompt=True))\r\n```\r\n\r\nuse-flash-attention-2=False:\r\n\r\nOnce upon a time, there was a dog named Max. Max was a lovable golden retriever who loved nothing more than to go for walks with his owner, Sarah. One day, while they were out on **a walk**,\r\n\r\nuse-flash-attention-2=True:\r\n\r\nOnce upon a time, there was a dog named Max. Max was a lovable golden retriever who loved nothing more than to go for walks with his owner, Sarah. One day, while they were out on **their usual stroll**,",
"Here is my minimal reproducible script:\r\n```python\r\nimport os\r\nimport torch\r\nimport torch.nn as nn\r\nimport torch.nn.functional as F\r\nfrom transformers.models.llama.configuration_llama import LlamaConfig\r\nfrom transformers.models.llama.modeling_llama import LlamaAttention, LlamaModel, _make_causal_mask\r\n\r\ndevice = torch.device(\"cuda\")\r\ndtype = torch.float16\r\n\r\nconfig_ori = LlamaConfig(\r\n hidden_size=1024,\r\n intermediate_size=128,\r\n num_hidden_layers=1,\r\n num_attention_heads=8,\r\n max_position_embeddings=16,\r\n _flash_attn_2_enabled=False\r\n)\r\n\r\nconfig_new = LlamaConfig(\r\n hidden_size=1024,\r\n intermediate_size=128,\r\n num_hidden_layers=1,\r\n num_attention_heads=8,\r\n max_position_embeddings=16,\r\n _flash_attn_2_enabled=True\r\n)\r\n\r\nmodel_ori = LlamaModel(config_ori)\r\nmodel_new = LlamaModel(config_new)\r\n\r\nmodel_new.load_state_dict(model_ori.state_dict())\r\n\r\nmodel_ori.to(dtype).to(device)\r\nmodel_new.to(dtype).to(device)\r\n\r\nattn_ori = model_ori.layers[0].self_attn\r\nattn_new = model_new.layers[0].self_attn\r\n\r\nbsz, hs, seqlen = 2, config_ori.hidden_size, 4\r\ninputs_embeds = torch.randn((bsz, seqlen, hs), dtype=dtype, device=device)\r\n\r\npadding_mask = torch.full((bsz, seqlen), 1, dtype=torch.long, device=device)\r\n# or pad a part\r\n# padding_mask[0, 2:] = 0\r\n\r\nout_ori = model_ori(attention_mask=padding_mask, inputs_embeds=inputs_embeds, use_cache=False)['last_hidden_state']\r\nout_new = model_new(attention_mask=padding_mask, inputs_embeds=inputs_embeds, use_cache=False)['last_hidden_state']\r\n\r\nout_ori.sum(), out_new.sum(), (out_ori - out_new).mean().item(), (out_ori - out_new).abs().max().item(), (out_ori - out_new).abs().mean().item()\r\n```\r\nI noticed that the numerical difference mainly comes from the padding_mask. If the padding_mask is None, it means we only use the causal mask, and the difference is small. However, if we set the padding_mask, we cannot ignore the difference.\r\n\r\n\r\nIf we run pytest from the offical flash-attn repo, the diff.abs().max().item() is always small:\r\n\r\n\r\nThe diff comes from the attention module. A more fine-grained code:\r\n```python\r\nbsz, hs, seqlen = 2, config_ori.hidden_size, 4\r\nhidden = torch.rand((bsz, seqlen, hs), dtype=dtype, device=device)\r\n\r\npadding_mask = torch.full((bsz, seqlen), 1, dtype=torch.long, device=device)\r\n# padding_mask[0, 2:] = 0\r\n\r\npast_key_values_length = 0\r\nkey_value_length = seqlen + past_key_values_length\r\n\r\nposition_ids = torch.arange(past_key_values_length, key_value_length, dtype=torch.long, device=device)\r\nposition_ids = position_ids.unsqueeze(0)\r\n\r\nif padding_mask is not None:\r\n attention_mask_ori = model_ori.attn_mask_converter.to_4d(\r\n padding_mask, seqlen, key_value_length, dtype=hidden.dtype\r\n )\r\nelse:\r\n attention_mask_ori = model_ori.attn_mask_converter.to_causal_4d(\r\n bsz, seqlen, key_value_length, dtype=hidden.dtype, device=hidden.device\r\n )\r\n\r\nout_ori, _, _ = attn_ori.forward(\r\n hidden, attention_mask=attention_mask_ori, position_ids=position_ids, \r\n)\r\n\r\nout_new, _, _ = attn_new.forward(\r\n hidden, attention_mask=padding_mask, position_ids=position_ids\r\n)\r\n\r\nout_ori.sum(), out_new.sum(), (out_ori - out_new).mean().item(), (out_ori - out_new).abs().max().item(), (out_ori - out_new).abs().mean().item()\r\n```\r\n\r\nUPDATE: It seems the diff lies in the padded part in the final attn weights? 
So maybe this should not affect the final training loss and the inference results?\r\n\r\nmy env:\r\n- `transformers` version: 4.35.0.dev0 (from commit aa4198a at 2023.10.27 main branch)\r\n- Platform: Linux-4.14.0_1-0-0-43-x86_64-with-glibc2.27\r\n- Python version: 3.9.17\r\n- Huggingface_hub version: 0.16.4\r\n- Safetensors version: 0.3.2\r\n- Accelerate version: 0.22.0\r\n- Accelerate config: not found\r\n- PyTorch version (GPU?): 1.13.1+cu117 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: <fill in>\r\n- Using distributed or parallel set-up in script?: <fill in>\r\n\r\nhope this helps!",
"Thanks for the deep dive @wizyoung! This thread already shows differences in the loss and the inference results, so something is afoot. ",
"cc @younesbelkada If I remember correctly when we debugged the flash attention tests, we found out that the attention mask was not properly taken into account and the attention weights for pad tokens was non zero in vanilla and zero for flash attention. This came from the way we create our attention mask, which adds two inf values, creating overflows. We should be able to easily fix! cc @patrickvonplaten as we talked about this ",
"> cc @younesbelkada If I remember correctly when we debugged the flash attention tests, we found out that the attention mask was not properly taken into account and the attention weights for pad tokens was non zero in vanilla and zero for flash attention. This came from the way we create our attention mask, which adds two inf values, creating overflows. We should be able to easily fix! cc @patrickvonplaten as we talked about this\r\n\r\nI think maybe this is not the actual cause. As two inf values will not cause much numerical difference after softmax. After applying your fix above, the output of the padded part still differs.\r\n\r\nThe results indicate that the padding mask does not take effect in computing attention weights.\r\n\r\nThe problem should come from the pad_input after computing flash attn results. \r\n\r\nUpdate: I ran a quick test on my work projects. In the baseline scenario, I trained and tested everything without using flash attention. For Experiment 1 (Exp1), I trained and tested while using flash attention. The evaluation process involved periodically switching to the test dataset, enabling `use_cache=True`, and performing batch inference. I noticed that the evaluation metrics in Exp1 were around 20% lower compared to the baseline. However, when I loaded the checkpoint from Exp1 without flash attention, the results were nearly identical to the baseline. This outcome matches my expectations because the discrepancies are mainly caused by padding, which is disregarded during the loss backward process and does not affect convergence. Nevertheless, I'm puzzled about why this would impact inference, as I believe that once the EOS token is predicted in the generation process, the process should be finished.",
"Thanks a lot @wizyoung for the deep dive! \r\n\r\n@ArthurZucker indeed we noticed some discrepencies with respect to padd tokens and I think at that time our conclusion was that\r\n\r\n>UPDATE: It seems the diff lies in the padded part in the final attn weights? So maybe this should not affect the final training loss and the inference results? \r\n\r\nas stated by @wizyoung ",
"The difference clearly resides on the padding tokens. \r\n\r\nWith FA-2:\r\n\r\n```bash\r\n(Pdb) self.o_proj(attn_output)\r\ntensor([[[ 0.6187, -0.9595, -0.2783, ..., 0.1057, -0.5645, -0.3220],\r\n [ 0.4392, -0.5137, -0.5078, ..., 0.0863, -0.3232, 0.1931],\r\n [ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],\r\n [ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000]],\r\n\r\n [[ 0.1334, 0.1556, -0.5737, ..., -0.1802, 0.2262, -0.6035],\r\n [-0.2883, -0.1821, -0.5303, ..., 0.2157, 0.0258, -0.0304],\r\n [-0.4187, -0.1300, -0.2747, ..., 0.3828, 0.0053, -0.3252],\r\n [-0.1055, 0.0997, -0.1527, ..., 0.3984, -0.1208, -0.1553]]],\r\n device='cuda:0', dtype=torch.float16, grad_fn=<UnsafeViewBackward0>)\r\n```\r\n\r\nWithout FA-2: \r\n\r\n```bash\r\ntensor([[[ 0.6187, -0.9595, -0.2783, ..., 0.1057, -0.5645, -0.3220],\r\n [ 0.4392, -0.5137, -0.5078, ..., 0.0862, -0.3232, 0.1930],\r\n [ 0.4172, -0.4719, -0.4473, ..., -0.1212, -0.3323, 0.0089],\r\n [ 0.5713, -0.4893, -0.4084, ..., -0.0648, -0.3967, -0.0724]],\r\n\r\n [[ 0.1334, 0.1556, -0.5737, ..., -0.1802, 0.2262, -0.6035],\r\n [-0.2883, -0.1821, -0.5303, ..., 0.2156, 0.0258, -0.0306],\r\n [-0.4187, -0.1299, -0.2747, ..., 0.3828, 0.0053, -0.3252],\r\n [-0.1055, 0.0997, -0.1527, ..., 0.3987, -0.1210, -0.1554]]],\r\n device='cuda:0', dtype=torch.float16, grad_fn=<UnsafeViewBackward0>)\r\n```\r\n\r\nAs you can see, the hidden states that corresponds to the indices of the attention mask:\r\n```bash\r\n(Pdb) attention_mask\r\ntensor([[1, 1, 0, 0],\r\n [1, 1, 1, 1]], device='cuda:0')\r\n```\r\n\r\nI also tried #27114 \r\n\r\nAre zero-ed out for FA2 whereas they're not for non-FA2 models. Will investigate more ",
"Hi everyone\r\nwe had a deeper look with @ArthurZucker and here are our findings:\r\n\r\n1- #27114 fixes another issue we have with all attention modules in transformers when combining attention masks together, leading sometimes to have undesired `inf` values inside these masks. \r\n\r\n2- for resolving the issue mentioned in the snippet of @wizyoung the adding the following inside the attention module: \r\n```diff\r\nattn_output = attn_output.transpose(1, 2).contiguous()\r\nattn_output = attn_output.reshape(bsz, q_len, self.hidden_size)\r\n\r\n+ if attention_mask is not None:\r\n+ sliced_attention_mask = attention_mask[:, 0, -1, :]\r\n+ attention_mask_2d = (1.0 * ~sliced_attention_mask.bool()).to(attn_output.dtype)\r\n+ attn_output = attn_output * attention_mask_2d.unsqueeze(-1)\r\n\r\nif self.config.pretraining_tp > 1:\r\n attn_output = attn_output.split(self.hidden_size // self.config.pretraining_tp, dim=2)\r\n o_proj_slices = self.o_proj.weight.split(self.hidden_size // self.config.pretraining_tp, dim=1)\r\n attn_output = sum([F.linear(attn_output[i], o_proj_slices[i]) for i in range(self.config.pretraining_tp)])\r\nelse:\r\n attn_output = self.o_proj(attn_output)\r\n\r\nif not output_attentions:\r\n attn_weights = None\r\n \r\nreturn attn_output, attn_weights, past_key_value\r\n```\r\nFixes the issue as this snippet correctly zeroes-out all the hidden states that are related to padding tokens. I am not sure this leads to any impact for generation. Given also that the slicing + cast operations can add some considerable overhead in the attention module (as it has to be done for every layer) I am not sure we should upstream these changes in transformers core. \r\n\r\nHowever the issue mentioned by @KyleMylonakisProtopia still persists (I am able to repro even with the fix), which needs further investigation",
"Thanks for the continued look!",
"I think the reason for the discrepancy between FA-2 and non-FA-2 here comes solely from the fact that we're comparing **padded output tensors** and/or included **padded hidden states vectors** in our results. Padded hidden states vectors are useless/moot and should never influence a loss or be compared.\r\n\r\nLet's explain a bit:\r\n\r\n1. **Padded hidden states vectors** are vectors that correspond to a sequence index `i` that is **not** attended to meaning `attention_mask[i]` is 0. This corresponds to the outer-most left tokens here: https://github.com/huggingface/transformers/issues/27050#issue-1960195010 since we use left-padding or all tokens after `2:` here: https://github.com/huggingface/transformers/issues/27050#issuecomment-1782529853 when doing right padding.\r\n\r\n2. One should **never take padded hidden states vectors** into account! One should never never compare padded hidden states vectors to each other because they are moot / useless and should never be used. This means when comparing the loss here: https://github.com/huggingface/transformers/issues/27050#issue-1960195010, one **has** to add -100 to the labels indexes that correspond to padding tokens to make sure they don't influence the loss. See [this](https://github.com/huggingface/transformers/issues/2946) issue as well. Similarly it doesn't make much sense to do this fix here: https://github.com/huggingface/transformers/issues/27050#issuecomment-1795089451 and to compare the outputs of padded tokens because they are useless anyways.\r\n\r\n3. What is going on here?! \r\n\r\nLet's look at a tiny code example that explains the behavior of non-FA2 code.\r\n\r\n```py\r\nfrom transformers.modeling_attn_mask_utils import _prepare_4d_causal_attention_mask\r\n\r\nattention_mask = torch.tensor([[0, 1, 1]]) # left padding\r\n\r\nprint(_prepare_4d_causal_attention_mask(attention_mask, (1, 3), attention_mask.float(), 0))\r\n```\r\n\r\nWe get \r\n```\r\ntensor([[[[-3.4028e+38, -3.4028e+38, -3.4028e+38],\r\n [-3.4028e+38, 0.0000e+00, -3.4028e+38],\r\n [-3.4028e+38, 0.0000e+00, 0.0000e+00]]]])\r\n```\r\n\r\nas expected. We see the causal mask and in addition we see that the first column has high negative values.\r\n\r\nNow let's run a softmax on the attention output corresonding to `Softmax(QK^T)` assuming that `QK^T` is 1s only.\r\n\r\n```py\r\nprint(_prepare_4d_causal_attention_mask(attention_mask, (1, 3), attention_mask.float(), 0).softmax(-2)\r\n```\r\n\r\n```\r\ntensor([[[[0.3333, 0.0000, 0.0000],\r\n [0.3333, 0.5000, 0.0000],\r\n [0.3333, 0.5000, 1.0000]]]])\r\n\r\n```\r\nAs we can see we put equal weight on all input tokens for the output of the padded hidden states vector. This means the output of the padded hidden states vector is very much not 0. \r\n\r\nFA-2 on the other hand just doesn't compute these outputs at all or forces them to be 0 which creates the difference.\r\n\r\n\r\n**Summary**\r\n\r\nLong story short, let's make sure to not compare outputs of padded hidden states. These states are moot no matter what and should not be used for anything. \r\n\r\nIt would be great to re-run the little experiment [here](https://github.com/huggingface/transformers/issues/27050#issue-1960195010) but making sure that -100 is provided for padded out tokens.",
"@patrickvonplaten Would you recommend us using Flash Attention 2 then over the default attention until this bug fix lands?",
"I don't think there is a bug at all tbh. Padding tokens are expected to differ between FA2 and vanilla attention.\r\nEven when only comparing non-padding tokens there will be minor differences due to the different CUDA kernels being used (but they should not be as big as shown here: https://github.com/huggingface/transformers/issues/27050#issue-1960195010)\r\n\r\nGenerally, I always recommend using FA2 if you can use it",
"> I don't think there is a bug at all tbh. Padding tokens are expected to differ between FA2 and vanilla attention. Even when only comparing non-padding tokens there will be minor differences due to the different CUDA kernels being used (but they should not be as big as shown here: [#27050 (comment)](https://github.com/huggingface/transformers/issues/27050#issue-1960195010))\r\n> \r\n> Generally, I always recommend using FA2 if you can use it\r\n\r\nI agree. The padded part should not affect the training loss and inference result. In my experiments, training with FA2 but test with vanilla attention does not make any affects at all. But the creepy thing is, training and test with FA2 yields poor results (but the weights is ok if I switch to vanilla attention at test). I see many issues also report the test result discrepancy when using model.generate. Just a guess, maybe we should conduct a more in-depth investigation into the post-process in model.generate?\r\n",
"> I think the reason for the discrepancy between FA-2 and non-FA-2 here comes solely from the fact that we're comparing **padded output tensors** and/or included **padded hidden states vectors** in our results. Padded hidden states vectors are useless/moot and should never influence a loss or be compared.\r\n> \r\n> Let's explain a bit:\r\n> \r\n> 1. **Padded hidden states vectors** are vectors that correspond to a sequence index `i` that is **not** attended to meaning `attention_mask[i]` is 0. This corresponds to the outer-most left tokens here: [Difference in LlamaAttention & LlamaFlashAttention2 attn_output #27050 (comment)](https://github.com/huggingface/transformers/issues/27050#issue-1960195010) since we use left-padding or all tokens after `2:` here: [Difference in LlamaAttention & LlamaFlashAttention2 attn_output #27050 (comment)](https://github.com/huggingface/transformers/issues/27050#issuecomment-1782529853) when doing right padding.\r\n> 2. One should **never take padded hidden states vectors** into account! One should never never compare padded hidden states vectors to each other because they are moot / useless and should never be used. This means when comparing the loss here: [Difference in LlamaAttention & LlamaFlashAttention2 attn_output #27050 (comment)](https://github.com/huggingface/transformers/issues/27050#issue-1960195010), one **has** to add -100 to the labels indexes that correspond to padding tokens to make sure they don't influence the loss. See [this](https://github.com/huggingface/transformers/issues/2946) issue as well. Similarly it doesn't make much sense to do this fix here: [Difference in LlamaAttention & LlamaFlashAttention2 attn_output #27050 (comment)](https://github.com/huggingface/transformers/issues/27050#issuecomment-1795089451) and to compare the outputs of padded tokens because they are useless anyways.\r\n> 3. What is going on here?!\r\n> \r\n> Let's look at a tiny code example that explains the behavior of non-FA2 code.\r\n> \r\n> ```python\r\n> from transformers.modeling_attn_mask_utils import _prepare_4d_causal_attention_mask\r\n> \r\n> attention_mask = torch.tensor([[0, 1, 1]]) # left padding\r\n> \r\n> print(_prepare_4d_causal_attention_mask(attention_mask, (1, 3), attention_mask.float(), 0))\r\n> ```\r\n> \r\n> We get\r\n> \r\n> ```\r\n> tensor([[[[-3.4028e+38, -3.4028e+38, -3.4028e+38],\r\n> [-3.4028e+38, 0.0000e+00, -3.4028e+38],\r\n> [-3.4028e+38, 0.0000e+00, 0.0000e+00]]]])\r\n> ```\r\n> \r\n> as expected. We see the causal mask and in addition we see that the first column has high negative values.\r\n> \r\n> Now let's run a softmax on the attention output corresonding to `Softmax(QK^T)` assuming that `QK^T` is 1s only.\r\n> \r\n> ```python\r\n> print(_prepare_4d_causal_attention_mask(attention_mask, (1, 3), attention_mask.float(), 0).softmax(-2)\r\n> ```\r\n> \r\n> ```\r\n> tensor([[[[0.3333, 0.0000, 0.0000],\r\n> [0.3333, 0.5000, 0.0000],\r\n> [0.3333, 0.5000, 1.0000]]]])\r\n> ```\r\n> \r\n> As we can see we put equal weight on all input tokens for the output of the padded hidden states vector. This means the output of the padded hidden states vector is very much not 0.\r\n> \r\n> FA-2 on the other hand just doesn't compute these outputs at all or forces them to be 0 which creates the difference.\r\n> \r\n> **Summary**\r\n> \r\n> Long story short, let's make sure to not compare outputs of padded hidden states. 
These states are moot no matter what and should not be used for anything.\r\n> \r\n> It would be great to re-run the little experiment [here](https://github.com/huggingface/transformers/issues/27050#issue-1960195010) but making sure that -100 is provided for padded out tokens.\r\n\r\nSo I understand what you are saying and agree, the padded tokens and hidden states should not be used at any point. However I disagree with your conclusion that no bug is necessarily present. \r\n\r\nThe example provided at the top of this thread does not have padding. If padding is being added and being used anywhere, that is happening in the Huggingface code. Moreover, the loss function we are reporting is the loss function by the Huggingface LLama2 model, again not something that we are writing. If there is a mistake in what we are doing, then we should be able to call out a specific line number in the script at the top of the page where a mistake is made, but I am really having a hard time finding one there. Otherwise whatever is causing the discrepancy would be part of either the Huggingface code, or the code distributed by Meta and hosted on Huggingface. ",
"> I think the reason for the discrepancy between FA-2 and non-FA-2 here comes solely from the fact that we're comparing **padded output tensors** and/or included **padded hidden states vectors** in our results. Padded hidden states vectors are useless/moot and should never influence a loss or be compared.\r\n> \r\n> Let's explain a bit:\r\n> \r\n> 1. **Padded hidden states vectors** are vectors that correspond to a sequence index `i` that is **not** attended to meaning `attention_mask[i]` is 0. This corresponds to the outer-most left tokens here: [Difference in LlamaAttention & LlamaFlashAttention2 attn_output #27050 (comment)](https://github.com/huggingface/transformers/issues/27050#issue-1960195010) since we use left-padding or all tokens after `2:` here: [Difference in LlamaAttention & LlamaFlashAttention2 attn_output #27050 (comment)](https://github.com/huggingface/transformers/issues/27050#issuecomment-1782529853) when doing right padding.\r\n> 2. One should **never take padded hidden states vectors** into account! One should never never compare padded hidden states vectors to each other because they are moot / useless and should never be used. This means when comparing the loss here: [Difference in LlamaAttention & LlamaFlashAttention2 attn_output #27050 (comment)](https://github.com/huggingface/transformers/issues/27050#issue-1960195010), one **has** to add -100 to the labels indexes that correspond to padding tokens to make sure they don't influence the loss. See [this](https://github.com/huggingface/transformers/issues/2946) issue as well. Similarly it doesn't make much sense to do this fix here: [Difference in LlamaAttention & LlamaFlashAttention2 attn_output #27050 (comment)](https://github.com/huggingface/transformers/issues/27050#issuecomment-1795089451) and to compare the outputs of padded tokens because they are useless anyways.\r\n> 3. What is going on here?!\r\n> \r\n> Let's look at a tiny code example that explains the behavior of non-FA2 code.\r\n> \r\n> ```python\r\n> from transformers.modeling_attn_mask_utils import _prepare_4d_causal_attention_mask\r\n> \r\n> attention_mask = torch.tensor([[0, 1, 1]]) # left padding\r\n> \r\n> print(_prepare_4d_causal_attention_mask(attention_mask, (1, 3), attention_mask.float(), 0))\r\n> ```\r\n> \r\n> We get\r\n> \r\n> ```\r\n> tensor([[[[-3.4028e+38, -3.4028e+38, -3.4028e+38],\r\n> [-3.4028e+38, 0.0000e+00, -3.4028e+38],\r\n> [-3.4028e+38, 0.0000e+00, 0.0000e+00]]]])\r\n> ```\r\n> \r\n> as expected. We see the causal mask and in addition we see that the first column has high negative values.\r\n> \r\n> Now let's run a softmax on the attention output corresonding to `Softmax(QK^T)` assuming that `QK^T` is 1s only.\r\n> \r\n> ```python\r\n> print(_prepare_4d_causal_attention_mask(attention_mask, (1, 3), attention_mask.float(), 0).softmax(-2)\r\n> ```\r\n> \r\n> ```\r\n> tensor([[[[0.3333, 0.0000, 0.0000],\r\n> [0.3333, 0.5000, 0.0000],\r\n> [0.3333, 0.5000, 1.0000]]]])\r\n> ```\r\n> \r\n> As we can see we put equal weight on all input tokens for the output of the padded hidden states vector. This means the output of the padded hidden states vector is very much not 0.\r\n> \r\n> FA-2 on the other hand just doesn't compute these outputs at all or forces them to be 0 which creates the difference.\r\n> \r\n> **Summary**\r\n> \r\n> Long story short, let's make sure to not compare outputs of padded hidden states. 
These states are moot no matter what and should not be used for anything.\r\n> \r\n> It would be great to re-run the little experiment [here](https://github.com/huggingface/transformers/issues/27050#issue-1960195010) but making sure that -100 is provided for padded out tokens.\r\n\r\nHi @patrickvonplaten , thanks for the detail explanation :) I agree that the attention output of using FA or not is different. \r\n\r\nHowever, as we know that we are doing a linear projection for the attention output `output = linear_proj(attn_output)`, which is essentially a matmul, `output = matmul(attn_output, weight)`. So the output is indeed affected by the `moot part`.\r\n\r\n",
"cc https://github.com/huggingface/transformers/pull/26421",
"Trying to narrow down the problem:\r\n\r\n\r\nIt would be great to test:\r\na) Training. @KyleMylonakisProtopia To make sure FA2 vs. no-FA2 influences training we need to make sure to add -100 to padded tokens as follows.\r\n\r\n\r\n```diff\r\nimport argparse\r\n\r\nimport torch\r\nimport torch.backends.cudnn\r\nimport transformers\r\nfrom transformers.models import llama\r\n\r\n\r\ndef main() -> None:\r\n torch.backends.cudnn.deterministic = True\r\n\r\n parser = argparse.ArgumentParser()\r\n parser.add_argument(\"--use-flash-attention-2\", action=\"store_true\")\r\n args = parser.parse_args()\r\n use_flash_attention_2 = args.use_flash_attention_2\r\n\r\n tokenizer = transformers.AutoTokenizer.from_pretrained(\r\n \"/models/huggingface/meta-llama/llama-2-7b-chat-hf\", local_files_only=True, use_safetensors=True, device_map=torch.device(\"cuda\")\r\n )\r\n tokenizer.pad_token = tokenizer.eos_token\r\n tokenizer.padding_side = \"left\"\r\n\r\n text = \"Hello world!\"\r\n tokenized_text = tokenizer(text)\r\n tokenized_text = {key: torch.tensor(value).unsqueeze(dim=0).to(torch.device(\"cuda\")) for key, value in tokenized_text.items()}\r\n tokenized_text[\"labels\"] = tokenized_text[\"input_ids\"].clone()\r\n+ tokenized_text[\"labels\"] = torch.where(attention_mask == 0, -100, tokenized_text[\"labels\"]) # make sure to not apply loss on padded tokens \r\n\r\n torch.manual_seed(0)\r\n model = llama.LlamaForCausalLM.from_pretrained(\r\n \"/models/huggingface/meta-llama/llama-2-7b-chat-hf\",\r\n local_files_only=True,\r\n use_safetensors=True,\r\n device_map=torch.device(\"cuda\"),\r\n use_flash_attention_2=use_flash_attention_2,\r\n torch_dtype=torch.bfloat16,\r\n )\r\n assert isinstance(model, llama.LlamaForCausalLM)\r\n model.eval()\r\n for param in model.parameters():\r\n param.requires_grad = False\r\n\r\n model.model.layers[0].train()\r\n for param in model.model.layers[0].parameters():\r\n param.requires_grad = True\r\n\r\n optim = torch.optim.AdamW(model.parameters())\r\n\r\n torch.manual_seed(0)\r\n\r\n for i in range(10):\r\n output = model(**tokenized_text)\r\n loss = output[\"loss\"]\r\n if i in (0, 9):\r\n print(loss)\r\n loss.backward()\r\n optim.step()\r\n optim.zero_grad()\r\n\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n```\r\n\r\nIn addition it won't be enough to loss at the loss curve and say there is a bug if they differ. They will surely differ since the backward method of FA2 is very different to no FA2. We need to actually train Llama on a bit of data and see how quickly the models learn depending on whether FA2 or no FA2 is implemented.\r\n\r\nb) It would be great to have a fully reproducible code snippet where we're clearly seeing different results between FA2 and no FA2 for `generate`. This should be easy to do. Just run both models with the same seed and generate and find an example where they significantly differ or where one is clearly better then the other one.\r\n\r\n> However, as we know that we are doing a linear projection for the attention output output = linear_proj(attn_output), which is essentially a matmul, output = matmul(attn_output, weight). So the output is indeed affected by the moot part.\r\n\r\nThis is not really correct because `linear_proj` is \"seq-len\"-independent. 
Image that the Softmax(QK^T) output is as follows:\r\n```\r\nattn_output = [vec1, vec2, vec3, pad_vec, pad_vec, vec4]\r\n```\r\n\r\nNow doing:\r\n```\r\noutput = matmul([vec1, vec2, vec3, pad_vec, pad_vec, vec4], weight)\r\n```\r\n\r\nwill give you again:\r\n```\r\n[new_vec1, new_vec2, new_vec3, new_pad_vec, new_pad_vec, new_vec4]\r\n```\r\n\r\nwhereby **importantly** `new_pad_vec` did **not** influence the computation of `new_vec1` at all. The linear projection is only applied over the feature dimension not the seq dimensions, hence you could also do the following and get the same results:\r\n\r\n```\r\nfor i in range(6)\r\n new_vec_{i} = matmul(vec_{i}, weight)\r\n```\r\n\r\nTo make some progress here it would be really really great if someone could provide a reproducible code snippet of either a) or b) ",
"This example of cckao is a great example of b): https://github.com/huggingface/transformers/issues/27050#issuecomment-1780424209\r\n\r\nBut I don't think it's a bug and simply due to different CUDA kernels being used. Note how similar the generations are. You would find similar differences just by running the same algorithm on a different hardware. If there would be a bug with the attention mask, the differences would be much starker.",
"Just ran with the additional line of code you suggested and unfortunately there was no change in the behavior. The discrepancy remains exactly as it was. \r\n\r\nYou mean the implementation of the backwards for FA2 is very different to the implementation of the method without FA2. The implementations with and without FA2 are both exact in the sense they are not performing any numerical approximations of the derivative. The sources of error would be truncation error and the non-associativity and commutativity of floating point numbers. Now it could be that that very small error accumulates rapidly due to lack of stability. If that were the case the decreasing the lr, say to 5e-5, and running out to 50 iterations should diminish the discrepancy. However when I do that I see even starker differences at iteration 50.\r\n\r\n```\r\npython flash_attn_non_determinism.py\r\ntensor(5.6589, device='cuda:0', grad_fn=<NllLossBackward0>)\r\ntensor(2.5236, device='cuda:0', grad_fn=<NllLossBackward0>)\r\n\r\npython flash_attn_non_determinism.py --use-flash-attention-2\r\ntensor(5.6612, device='cuda:0', grad_fn=<NllLossBackward0>)\r\ntensor(0.4144, device='cuda:0', grad_fn=<NllLossBackward0>)\r\n```\r\n\r\nWe know they modes are training differently with and without FA2 already because that's why we made this ticket in the first place: we were not able to reproduce the same results that we had previously established without FA2 after enabling it. ",
"Thanks for re-running the training script @KyleMylonakisProtopia ! \r\nAnd in your training experiments, using FA2 doesn't give sensible results where as not using FA2 for training does? \r\nAlso, it seems like both work correctly in inference no? \r\n\r\n\r\n=> So could it be that there is then a bug with FA2 for training only? ",
"Have we looked at how the gradients of attention vs. flash attention 2 are backpropagating? ",
"> This example of cckao is a great example of b): [#27050 (comment)](https://github.com/huggingface/transformers/issues/27050#issuecomment-1780424209)\r\n> \r\n> But I don't think it's a bug and simply due to different CUDA kernels being used. Note how similar the generations are. You would find similar differences just by running the same algorithm on a different hardware. If there would be a bug with the attention mask, the differences would be much starker.\r\n\r\nThe difference actually becomes quite sensible after the first different token due to the nature of autoregressive models. If the difference is due to different CUDA kernels and we cannot fix it, that really limited the application of FA2 to pretrained models."
] | 1,698 | 1,704 | null |
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.34.1
- Platform: Linux-5.15.0-86-generic-x86_64-with-glibc2.31
- Python version: 3.11.5
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: 0.23.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
We notice `LlamaFlashAttention2._flash_attention_forward` returns a different `attn_output` than `LlamaAttention` computes.
`flash_attn_non_determinism.py`:
```python
import argparse
import torch
import torch.backends.cudnn
import transformers
from transformers.models import llama
def main() -> None:
torch.backends.cudnn.deterministic = True
parser = argparse.ArgumentParser()
parser.add_argument("--use-flash-attention-2", action="store_true")
args = parser.parse_args()
use_flash_attention_2 = args.use_flash_attention_2
tokenizer = transformers.AutoTokenizer.from_pretrained(
"/models/huggingface/meta-llama/llama-2-7b-chat-hf", local_files_only=True, use_safetensors=True, device_map=torch.device("cuda")
)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "left"
text = "Hello world!"
tokenized_text = tokenizer(text)
tokenized_text = {key: torch.tensor(value).unsqueeze(dim=0).to(torch.device("cuda")) for key, value in tokenized_text.items()}
tokenized_text["labels"] = tokenized_text["input_ids"].clone()
torch.manual_seed(0)
model = llama.LlamaForCausalLM.from_pretrained(
"/models/huggingface/meta-llama/llama-2-7b-chat-hf",
local_files_only=True,
use_safetensors=True,
device_map=torch.device("cuda"),
use_flash_attention_2=use_flash_attention_2,
torch_dtype=torch.bfloat16,
)
assert isinstance(model, llama.LlamaForCausalLM)
model.eval()
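    # Freeze the whole model, then unfreeze and train only the first decoder layer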
for param in model.parameters():
param.requires_grad = False
model.model.layers[0].train()
for param in model.model.layers[0].parameters():
param.requires_grad = True
optim = torch.optim.AdamW(model.parameters())
torch.manual_seed(0)
for i in range(10):
output = model(**tokenized_text)
loss = output["loss"]
if i in (0, 9):
print(loss)
loss.backward()
optim.step()
optim.zero_grad()
if __name__ == "__main__":
main()
```
```console
$ python flash_attn_non_determinism.py --use-flash-attention-2
tensor(5.6612, device='cuda:0', grad_fn=<NllLossBackward0>)
tensor(0.3542, device='cuda:0', grad_fn=<NllLossBackward0>)
$ python flash_attn_non_determinism.py
tensor(5.6589, device='cuda:0', grad_fn=<NllLossBackward0>)
tensor(0.2275, device='cuda:0', grad_fn=<NllLossBackward0>)
```
### Expected behavior
I am not expecting the magnitude of the difference between the 2 implementations. After 10 iterations the loss is `0.2275` without FA2 versus `0.3542` with FA2, a difference of `0.1267`, which seems very large.
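As a follow-up to the question in the comments about how the gradients propagate, here is a hedged diagnostic sketch comparing the first-layer gradient norms with and without FA2 after a single step. It reuses the local model path from the script above and is only an added illustration, not code from the original report:
```python
import torch
import transformers
from transformers.models import llama

MODEL_PATH = "/models/huggingface/meta-llama/llama-2-7b-chat-hf"  # same local path as in the script above

def layer0_grad_norms(use_flash_attention_2: bool) -> dict:
    tokenizer = transformers.AutoTokenizer.from_pretrained(MODEL_PATH, local_files_only=True)
    model = llama.LlamaForCausalLM.from_pretrained(
        MODEL_PATH,
        local_files_only=True,
        use_safetensors=True,
        device_map=torch.device("cuda"),
        use_flash_attention_2=use_flash_attention_2,
        torch_dtype=torch.bfloat16,
    )
    batch = tokenizer("Hello world!", return_tensors="pt").to(torch.device("cuda"))
    out = model(**batch, labels=batch["input_ids"])
    out.loss.backward()
    # L2 norms of the gradients in the first decoder layer (the only layer trained above)
    return {name: p.grad.float().norm().item() for name, p in model.model.layers[0].named_parameters() if p.grad is not None}

fa2 = layer0_grad_norms(True)
ref = layer0_grad_norms(False)  # note: loading the 7B model twice needs enough GPU memory
for name in ref:
    print(f"{name}: fa2={fa2[name]:.6f} eager={ref[name]:.6f}")
```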
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27050/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27050/timeline
|
reopened
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/27049
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27049/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27049/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27049/events
|
https://github.com/huggingface/transformers/issues/27049
| 1,960,125,156 |
I_kwDOCUB6oc501SLk
| 27,049 |
deprecation warning from transformers itself
|
{
"login": "pseudotensor",
"id": 2249614,
"node_id": "MDQ6VXNlcjIyNDk2MTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2249614?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pseudotensor",
"html_url": "https://github.com/pseudotensor",
"followers_url": "https://api.github.com/users/pseudotensor/followers",
"following_url": "https://api.github.com/users/pseudotensor/following{/other_user}",
"gists_url": "https://api.github.com/users/pseudotensor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pseudotensor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pseudotensor/subscriptions",
"organizations_url": "https://api.github.com/users/pseudotensor/orgs",
"repos_url": "https://api.github.com/users/pseudotensor/repos",
"events_url": "https://api.github.com/users/pseudotensor/events{/privacy}",
"received_events_url": "https://api.github.com/users/pseudotensor/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Yep, a CTRL + F in transformers indicates that this is also the case for some peft calls here and there. I'll open a pr for a fix, thanks for reporting!"
] | 1,698 | 1,699 | 1,699 |
NONE
| null |
### System Info
https://github.com/huggingface/transformers/blob/6cbc1369a330860c128a1ba365f246751382c9e5/src/transformers/generation/configuration_utils.py#L717
This line passes `use_auth_token` even though it's deprecated, so the deprecation warning always shows up despite no user mishandling.
### Who can help?
@gante
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Should be obvious from the code line shared above.
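A hedged reproduction sketch (the `gpt2` checkpoint is just an example, nothing auth-related is passed by the caller, and whether the warning fires depends on the transformers version):
```python
import warnings
from transformers import AutoModelForCausalLM

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    AutoModelForCausalLM.from_pretrained("gpt2")  # the caller never passes use_auth_token

# Non-empty on affected versions even though the caller never used the deprecated argument.
print([str(w.message) for w in caught if "use_auth_token" in str(w.message)])
```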
### Expected behavior
No deprecation warning
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27049/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27049/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/27048
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27048/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27048/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27048/events
|
https://github.com/huggingface/transformers/pull/27048
| 1,959,947,146 |
PR_kwDOCUB6oc5drV6Q
| 27,048 |
[DOCS] minor fixes in README.md
|
{
"login": "Akash190104",
"id": 112017800,
"node_id": "U_kgDOBq1BiA",
"avatar_url": "https://avatars.githubusercontent.com/u/112017800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Akash190104",
"html_url": "https://github.com/Akash190104",
"followers_url": "https://api.github.com/users/Akash190104/followers",
"following_url": "https://api.github.com/users/Akash190104/following{/other_user}",
"gists_url": "https://api.github.com/users/Akash190104/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Akash190104/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Akash190104/subscriptions",
"organizations_url": "https://api.github.com/users/Akash190104/orgs",
"repos_url": "https://api.github.com/users/Akash190104/repos",
"events_url": "https://api.github.com/users/Akash190104/events{/privacy}",
"received_events_url": "https://api.github.com/users/Akash190104/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27048). All of your documentation changes will be reflected on that endpoint."
] | 1,698 | 1,698 | 1,698 |
CONTRIBUTOR
| null |
minor fixes
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27048/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27048/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27048",
"html_url": "https://github.com/huggingface/transformers/pull/27048",
"diff_url": "https://github.com/huggingface/transformers/pull/27048.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27048.patch",
"merged_at": 1698254473000
}
|
https://api.github.com/repos/huggingface/transformers/issues/27047
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27047/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27047/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27047/events
|
https://github.com/huggingface/transformers/pull/27047
| 1,959,676,474 |
PR_kwDOCUB6oc5dqcLW
| 27,047 |
Correct docstrings and a typo in comments
|
{
"login": "lewis-yeung",
"id": 83903009,
"node_id": "MDQ6VXNlcjgzOTAzMDA5",
"avatar_url": "https://avatars.githubusercontent.com/u/83903009?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewis-yeung",
"html_url": "https://github.com/lewis-yeung",
"followers_url": "https://api.github.com/users/lewis-yeung/followers",
"following_url": "https://api.github.com/users/lewis-yeung/following{/other_user}",
"gists_url": "https://api.github.com/users/lewis-yeung/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewis-yeung/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewis-yeung/subscriptions",
"organizations_url": "https://api.github.com/users/lewis-yeung/orgs",
"repos_url": "https://api.github.com/users/lewis-yeung/repos",
"events_url": "https://api.github.com/users/lewis-yeung/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewis-yeung/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27047). All of your documentation changes will be reflected on that endpoint."
] | 1,698 | 1,698 | 1,698 |
CONTRIBUTOR
| null |
## What does this PR do?
Correct docstrings of the following methods:
- `TrainingArguments.set_save`
- `TrainingArguments.set_logging`
## Who can review?
@stevhliu
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27047/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27047/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27047",
"html_url": "https://github.com/huggingface/transformers/pull/27047",
"diff_url": "https://github.com/huggingface/transformers/pull/27047.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27047.patch",
"merged_at": 1698335177000
}
|
https://api.github.com/repos/huggingface/transformers/issues/27046
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27046/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27046/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27046/events
|
https://github.com/huggingface/transformers/pull/27046
| 1,959,648,086 |
PR_kwDOCUB6oc5dqWDt
| 27,046 |
translate transformers_agents.md to Chinese
|
{
"login": "jiaqiw09",
"id": 60021713,
"node_id": "MDQ6VXNlcjYwMDIxNzEz",
"avatar_url": "https://avatars.githubusercontent.com/u/60021713?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiaqiw09",
"html_url": "https://github.com/jiaqiw09",
"followers_url": "https://api.github.com/users/jiaqiw09/followers",
"following_url": "https://api.github.com/users/jiaqiw09/following{/other_user}",
"gists_url": "https://api.github.com/users/jiaqiw09/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiaqiw09/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiaqiw09/subscriptions",
"organizations_url": "https://api.github.com/users/jiaqiw09/orgs",
"repos_url": "https://api.github.com/users/jiaqiw09/repos",
"events_url": "https://api.github.com/users/jiaqiw09/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiaqiw09/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @stevhliu\r\n\r\nI think it's a very interesting doc, although a little hard to translater as it's more like a introductory article than a technical one. 😀\r\n\r\nAnd I think if more docs are translated, it may attract more people to do some contribution. So just do it !\r\n\r\nBesides, I find it's hard to make all subfolders update existed files or non-existed files in time when files in en-folder make some change. Maybe you may have some idea or suggestion. You can just comment this issue https://github.com/huggingface/transformers/issues/26803 or just make a new issue ! But it's still a work that will need to be taken care of much later. Currently, just do the translation.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27046). All of your documentation changes will be reflected on that endpoint.",
"@stevhliu \r\n\r\nthanks for your review. i just update the file. And for linkS, I just redirect them to files in en folder. and for title, I think it's better to keep transformer_agent untranslated while I just append '教程' to make it more clear. \r\n\r\nBesides, to make it more like a article, what I can do now is just try to translate it more clear rather using normal technical way, which just list step1, step2...\r\n\r\nBest",
"> @stevhliu\r\n> \r\n> thanks for your review. i just update the file. And for linkS, I just redirect them to files in en folder. and for title, I think it's better to keep transformer_agent untranslated while I just append '教程' to make it more clear.\r\n> \r\n> Besides, to make it more like a article, what I can do now is just try to translate it more clear rather using normal technical way, which just list step1, step2...\r\n> \r\n> Best\r\n\r\nI think the original English title for this tutorial is \"Agents\" rather than \"transformer_agent\"\r\n\r\n\r\n"
] | 1,698 | 1,698 | 1,698 |
CONTRIBUTOR
| null |
# What does this PR do?
Part of #26803
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? _not necessary_
## Who can review?
@stevhliu
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27046/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27046/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27046",
"html_url": "https://github.com/huggingface/transformers/pull/27046",
"diff_url": "https://github.com/huggingface/transformers/pull/27046.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27046.patch",
"merged_at": 1698435944000
}
|
https://api.github.com/repos/huggingface/transformers/issues/27045
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27045/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27045/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27045/events
|
https://github.com/huggingface/transformers/pull/27045
| 1,959,599,536 |
PR_kwDOCUB6oc5dqLZ6
| 27,045 |
[`core` / `Quantization` ] AWQ integration
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"This is super exciting to see! The original repository does not support TheBloke’s quants, they were made with AutoAWQ - perhaps an argument to route to AutoAWQ for compatibility. ",
"Thanks @amyeroberts for your review! I have replied to your questions above, I believe that all the points are already being addressed. Let me know if I missed anything",
"Thanks for all your reviews @amyeroberts @ArthurZucker @SunMarc @casper-hansen !"
] | 1,698 | 1,698 | 1,698 |
CONTRIBUTOR
| null |
# What does this PR do?
As per title, this PR adds the AWQ inference support in transformers.
- Original paper: https://arxiv.org/abs/2306.00978
- Original LLM-AWQ repository: https://github.com/mit-han-lab/llm-awq/
- Auto-AWQ repository: https://github.com/casper-hansen/AutoAWQ

AWQ is a new and popular quantization scheme, already used in various libraries such as TGI, vllm, etc., and known to be faster than GPTQ models according to some benchmarks.
Contrary to GPTQ, in this integration we want to support **inference only** - since the ecosystem is quite mature with respect to quantizing a model, we will publicize different routes for that purpose, such as using `auto-awq`, the original repository or optimum Neural Compressor.
For now I have pushed a 'test' model under this repository: https://huggingface.co/ybelkada/test-mistral-7b-v0.1-awq but we plan to support all AWQ weights from TheBloke. For running experiments using this PR, you can first `pip install autoawq` then run:
<details>
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "ybelkada/test-mistral-7b-v0.1-awq"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, low_cpu_mem_usage=True).to(0)
print(model)
text = ["Hello my name is", "hi"]
input_ids = tok.encode(text, return_tensors="pt").to(0)
output = model.generate(input_ids, max_new_tokens=40)
print(tok.batch_decode(output, skip_special_tokens=True))
```
</details>
## TODO:
- [x] Benchmarks
- [x] Documentation
- [ ] Support fused modules
- [x] Colab inference demo
- [x] Write tests
- [ ] Support weights that have been quantized with optimum NC
- [x] Support weights that have been quantized with llm-awq
cc @fxmarty @SunMarc @casper-hansen @TheBloke @IlyasMoutawwakil
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27045/reactions",
"total_count": 5,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 5,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27045/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27045",
"html_url": "https://github.com/huggingface/transformers/pull/27045",
"diff_url": "https://github.com/huggingface/transformers/pull/27045.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27045.patch",
"merged_at": 1698825992000
}
|
https://api.github.com/repos/huggingface/transformers/issues/27044
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27044/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27044/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27044/events
|
https://github.com/huggingface/transformers/issues/27044
| 1,959,555,993 |
I_kwDOCUB6oc50zHOZ
| 27,044 |
Pruning/Compressing heads in attention blocks
|
{
"login": "NamburiSrinath",
"id": 40389487,
"node_id": "MDQ6VXNlcjQwMzg5NDg3",
"avatar_url": "https://avatars.githubusercontent.com/u/40389487?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NamburiSrinath",
"html_url": "https://github.com/NamburiSrinath",
"followers_url": "https://api.github.com/users/NamburiSrinath/followers",
"following_url": "https://api.github.com/users/NamburiSrinath/following{/other_user}",
"gists_url": "https://api.github.com/users/NamburiSrinath/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NamburiSrinath/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NamburiSrinath/subscriptions",
"organizations_url": "https://api.github.com/users/NamburiSrinath/orgs",
"repos_url": "https://api.github.com/users/NamburiSrinath/repos",
"events_url": "https://api.github.com/users/NamburiSrinath/events{/privacy}",
"received_events_url": "https://api.github.com/users/NamburiSrinath/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] |
open
| false | null |
[] |
[
"Hey! Would recommend you to have a look at the source code [here](https://github.com/ArthurZucker/transformers/blob/536e339b74b494615086b4286a3d14b98b42ac21/src/transformers/models/bert/modeling_bert.py#L399). The pruning logic is already implemented, and looking at the operations performed in the attention layers should help you clarify where the heads are, where the reshaping happens etc 😉 ",
"Thanks @ArthurZucker, it was super helpful and I was successfully able to do it for BERT :)\r\n\r\nBut when I am trying to do it for Flan-T5, it threw the following error\r\n\r\n```\r\nAttributeError: 'T5ForConditionalGeneration' object has no attribute \r\n'_prune_heads' \r\n```\r\n\r\nBut when I did `help(model)`, FlanT5 base class does have `prune_heads` function.\r\n\r\nCan you verify if it's a bug in HF? Because when the `prune_heads` works for BERT, I expected it to work for other class of models as well. \r\n\r\nHere's the code (ignore the print statements, it works when the model is _bert-base-uncased_):\r\n\r\n```\r\nfrom transformers import AutoModelForSeq2SeqLM, AutoTokenizer\r\nmodel = AutoModelForSeq2SeqLM.from_pretrained(\"google/flan-t5-small\")\r\ntokenizer = AutoTokenizer.from_pretrained(\"google/flan-t5-small\")\r\nprint(model)\r\nprint(\"-\"*50)\r\nprint(help(model))\r\nprint(\"-\"*50)\r\nmodel.prune_heads({0: [1, 2]})\r\nprint(model)\r\n```",
"In retrospection, I am not sure how it can be done. Because for _encoder-only_ models, having a prune dictionary `{0:[1, 2]}` makes sense, \"_pick the first 2 heads from 1st layer_\"\r\n\r\nBut in Encoder-decoder models, we will have 2 types of attentions:\r\n1. Encoder self-attention\r\n2. Decoder self-attention \r\n3. Decoder cross-attention\r\n\r\nSo, this numbering system of a dictionary of lists might be difficult! \r\n\r\nCorrect if my understanding is wrong!",
"No prune heads is not supported for all models 😉 ",
"Thanks for confirming @ArthurZucker, I would like to prune particular heads in Encoder-Decoder models -- Example applications include papers from academia such as - [Paper1](https://lena-voita.github.io/posts/acl19_heads.html), [Paper2](https://github.com/pmichel31415/are-16-heads-really-better-than-1)\r\n\r\nCan you suggest a way to do so? For example, a boiler-plate code which does this as I am unable to wrap around my head completely --\r\n\r\nHere's my thought\r\n\r\n```\r\nmodel = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)\r\nlinear_layers_list = []\r\nmodel_layers_list = [f'encoder.block.{i}.layer.0.SelfAttention.q', 'encoder.block.{i}.layer.0.SelfAttention.k', 'encoder.block.{i}.layer.0.SelfAttention.v']\r\n\r\nfor name, layer in model.named_modules():\r\n if name in model_layers_list:\r\n linear_layers_list.append(layer)\r\nlayer = linear_layers_list[layer_index]\r\n\r\nif prune_type == 'ln_structured':\r\n # Ln structured with n=1 i.e L1 pruning\r\n prune.ln_structured(layer, name='weight', amount=prune_percentage, dim=1, n=n)\r\n```\r\n\r\nIn the above code, I prune the particular weight matrix (q, k, v) in _block 'i'_ at _output_dim_ (dim=1) with _prune_percentage_\r\n\r\nNow, the changes that might be needed are:\r\n\r\n1. Instead of _l1-structured_ with _prune_percentage_, prune the head (maybe have _head_index_ and make all the values in that 0)\r\n2. If removing via _prune_heads_ logic as in Encoder-models, we might need to reshape. Or we can have a _weight_orig_ ([Pytorch link](https://pytorch.org/tutorials/intermediate/pruning_tutorial.html#remove-pruning-re-parametrization)), which has 0s corresponding to head positions that needs to be pruned and we can simply multiply internally.\r\n\r\nAny help would be greatly appreciated! If you feel this adds value in HF, I can try contributing to a PR\r\n\r\nThanks a ton :)",
"Hi @ArthurZucker, \r\n\r\nIt's been a while and I thought to follow-up. The below code tells that `prune_heads` is supported by T5 Class. If so, I am getting an error as mentioned in above thread.\r\n \r\n```\r\nAttributeError: 'T5ForConditionalGeneration' object has no attribute \r\n'_prune_heads' \r\n```\r\nCode snippet:\r\n```\r\nfrom transformers import AutoModelForSeq2SeqLM, AutoTokenizer\r\nmodel = AutoModelForSeq2SeqLM.from_pretrained(\"google/flan-t5-small\")\r\nprint(help(model))\r\n```\r\nOutput logs:\r\n```\r\nclass T5ForConditionalGeneration(T5PreTrainedModel)\r\n...\r\nprune_heads(self, heads_to_prune: Dict[int, List[int]])\r\n | Prunes heads of the base model.\r\n | \r\n | Arguments:\r\n | heads_to_prune (`Dict[int, List[int]]`):\r\n | Dictionary with keys being selected layer indices (`int`) and associated values being the list of heads\r\n | to prune in said layer (list of `int`). For instance {1: [0, 2], 2: [2, 3]} will prune heads 0 and 2 on\r\n | layer 1 and heads 2 and 3 on layer 2.\r\n```\r\n\r\nIf it's not supported, then:\r\n\r\n1. Please update the documentation reflecting the same\r\n2. Help me if my understanding on removing heads is correct: Basically I would like to know if the heads internally are like this - [(768, 64), (768, 64) ... 12 times] -> (768, 768) i.e the heads are concatenated at dim=1 so, we need to create mask at dim=1 and remove the connections to prune a particular head. (Eg numbers from BERT-base but holds true for any Attention module)\r\n\r\nDo correct if my understanding is wrong\r\nThanks a ton in advance!\r\n\r\n\r\nSame issue appx a year ago which became stale - [https://github.com/huggingface/transformers/issues/19625](https://github.com/huggingface/transformers/issues/19625)\r\n\r\nIssue raised in Huggingface forms - [https://discuss.huggingface.co/t/t5forconditionalgeneration-object-has-no-attribute-prune-heads/16003](https://discuss.huggingface.co/t/t5forconditionalgeneration-object-has-no-attribute-prune-heads/16003)",
"Hi @ArthurZucker,\r\n\r\nI worked on this and would like to know if the following code is logically correct:\r\n\r\n```\r\nfrom transformers import AutoModelForSeq2SeqLM, AutoTokenizer\r\nmodel = AutoModelForSeq2SeqLM.from_pretrained(\"google/flan-t5-small\")\r\ntokenizer = AutoTokenizer.from_pretrained(\"google/flan-t5-small\") \r\nnum_heads = 8\r\n\r\n# A function which zeroes out particular columns depending on head_index!\r\ndef zero_out_columns(layer, head_index):\r\n in_features = layer.in_features\r\n out_features = layer.out_features\r\n start_col = head_index * (out_features // num_heads)\r\n end_col = (head_index + 1) * (out_features // num_heads)\r\n \r\n # Zero out the specified columns\r\n layer.weight.data[:, start_col:end_col] = 0\r\n\r\n# Select the first Attention block in encoder\r\nqkv_layers_list = ['encoder.block.0.layer.0.SelfAttention.q', 'encoder.block.0.layer.0.SelfAttention.k',\r\n 'encoder.block.0.layer.0.SelfAttention.v']\r\n\r\n# Prune the 1st head\r\nfor name, layer in model.named_modules():\r\n if name in qkv_layers_list:\r\n zero_out_columns(layer, 0)\r\n```\r\n\r\nThe above code is expected to _prune the first head from first attention block_ of FlanT5. If that's not correct, please help me to understand it in a better way!\r\n\r\n**P.S:** No of heads (from T5 paper which I believe is the case for FlanT5 as well):\r\n```\r\nSmall - 8\r\nBase - 12\r\nlarge - 16\r\nXL - 32\r\nXXL - 128 \r\n```\r\n\r\nIf so, I would also appreciate adding the number of heads information to documentation (or maybe I am unable to figure out!)\r\n\r\nWould be happy to open a PR if this is useful :)",
"Hey @NamburiSrinath, thanks for your interest in the `transformers` codebase and for your questions. I understand that you might be working on something urgent, but please refrain from pinging as many different persons, I'm here interacting with you, and I'll try to get you through this. \r\n\r\n1. T5 seems to support head pruning, with the `T5Model` but not the `T5ForConditionalGeneration`. That is not necessarly expected and I think we should support it.\r\n2. Regarding your understanding of how the head works, how they are supposed to be pruned etc: I can serve you an answer of course, but I'll try to give you some hints first: \r\n - [Here](https://github.com/huggingface/transformers/blob/06146e312a5e2d588fcb10810c9c06410d1fa078/src/transformers/pytorch_utils.py#L53) you have an example of how we prune heads. As you can see, the new layer is a bit different from the old one. Based on this, I would say that zero-ing out the weights as you are doing might not be optimal. It's also not applicable for layers that have a bias. Other than that looks alright to me. \r\n ",
"Thanks for the response @ArthurZucker, sorry about pinging multiple people, will refrain doing that :)\r\n\r\n1. If you can help me, I can open a PR as you also believe it needs to be supported\r\n2. Zeroing out incase bias is there can be done simply by mapping the start and end indices! But as you said, I would like to know how it's not optimal. I am making changes in-place, and I think that's not ideal for a PR. I can understand that!\r\n\r\nIf my understanding is correct, the code you shared essentially creates a copy of the W, b and inserts those values to a new Linear layer module respecting the dimension of the prune arguments given i.e we won't select the values corresponding to the prune indices while placing it to the new module!\r\n\r\nIf my understanding is correct, I am still having trouble understanding on integrating this to `transformers/modeling_utils.py` ",
"Hi @ArthurZucker,\r\n\r\nGently pinging for a follow-up.\r\n\r\nIf the solution I provided is not correct -- Suggest me how the data structure looks like!\r\nIf the solution I provided is correct but not optimal -- That's great, it resolves my task. But I would like to hear your thoughts on how to make it optimal and do a PR given there're lots of previous issues which became stale :( ",
"Hey, #20106 gives you an idea of how it should be implemented. Feel free to take it over 😉 \r\nA lot of other models integrate this in the attention layer in a similar way, I recommend you to check them out! \r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,698 | 1,702 | null |
NONE
| null |
### Feature request
I've a conceptual question
BERT-base has a dimension of 768 for query, key and value, and 12 heads (hidden dimension = 768, number of heads = 12). The same is conveyed by the BERT-base architecture:
```
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
```
Now, my question is:
Can I consider the first 64 neurons of the _out_features_ as the first head, the next 64 neurons of the _out_features_ as the 2nd head, and so on? (sec 3.2.2 of the original paper; [Link](https://papers.nips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf))
Basically, I am wondering if the Linear module representing the query matrix, which is 768x768, can be thought of as (768x64), (768x64)... 12 times?
If so, is it possible to provide some starter code, as I am unable to wrap my head around it? Any help is appreciated (and I have a sample in the contribution section).
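For illustration, here is a small sketch of that interpretation (it assumes `bert-base-uncased` and is added only to make the head layout concrete):
```python
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")
query = model.encoder.layer[0].attention.self.query  # Linear(in_features=768, out_features=768)
num_heads = model.config.num_attention_heads          # 12
head_dim = query.out_features // num_heads            # 64

# nn.Linear stores weight as (out_features, in_features); BERT reshapes the projection
# output into (num_heads, head_dim), so each consecutive block of 64 output rows is one head.
per_head = query.weight.view(num_heads, head_dim, query.in_features)
print(per_head.shape)                                     # torch.Size([12, 64, 768])
print(torch.equal(per_head[0], query.weight[:head_dim]))  # True: head 0 == first 64 rows
```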
**P.S:** Here's the issue from StackOverflow ([link](https://datascience.stackexchange.com/questions/124233/understanding-multi-headed-attention-from-architecture-details))
### Motivation
Example applications include papers from academia such as - [Paper1](https://lena-voita.github.io/posts/acl19_heads.html), [Paper2](https://github.com/pmichel31415/are-16-heads-really-better-than-1)
I referred to some of the previous posts ([link](https://datascience.stackexchange.com/questions/88330/how-do-the-linear-layers-in-the-attention-mechanism-work)), but I would appreciate any validation on this thought-process as it's similar but not the same.
### Your contribution
Here's some code which prunes a particular % of a particular layer depending on _layer_index_ and _prune_percentage_ (`checkpoint`, `model_layers_list`, `prune_type` and `n` are defined elsewhere):
```
from torch.nn.utils import prune
from transformers import AutoModelForMaskedLM

model = AutoModelForMaskedLM.from_pretrained(checkpoint)

# model_layers_list holds the names of the Linear modules to target,
# e.g. "bert.encoder.layer.0.attention.self.query"
linear_layers_list = []
for name, layer in model.named_modules():
    if name in model_layers_list:
        linear_layers_list.append(layer)
print(f"No of linear layers are: {len(linear_layers_list)}")

layer = linear_layers_list[layer_index]
if prune_type == 'ln_structured':
    # Ln structured with n=1 i.e. L1 pruning
    prune.ln_structured(layer, name='weight', amount=prune_percentage, dim=0, n=n)
```
I can understand that I can basically pass the Linear module and prune x% of weights.
Now, I would like to prune/remove one head in a similar fashion.
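One hedged way to express that with the same pruning API is a custom mask that zeroes the rows belonging to a single head (the checkpoint, layer and head indices below are illustrative):
```python
import torch
from torch.nn.utils import prune
from transformers import AutoModelForMaskedLM

model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
num_heads = model.config.num_attention_heads

def prune_single_head(layer: torch.nn.Linear, head_index: int) -> None:
    head_dim = layer.out_features // num_heads
    mask = torch.ones_like(layer.weight)
    # Zero the rows (output features) that belong to this head.
    mask[head_index * head_dim:(head_index + 1) * head_dim, :] = 0
    prune.custom_from_mask(layer, name="weight", mask=mask)

# Example: remove head 0 of the first layer's query/key/value projections.
attn = model.bert.encoder.layer[0].attention.self
for proj in (attn.query, attn.key, attn.value):
    prune_single_head(proj, head_index=0)
```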
Thanks
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27044/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27044/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27043
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/27043/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/27043/comments
|
https://api.github.com/repos/huggingface/transformers/issues/27043/events
|
https://github.com/huggingface/transformers/pull/27043
| 1,959,546,473 |
PR_kwDOCUB6oc5dp_9Z
| 27,043 |
Fix config silent copy in from_pretrained
|
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> Thanks, was there an issue related to this somewhere? 🤗\r\n\r\nYeah in bart flash attention"
] | 1,698 | 1,698 | 1,698 |
MEMBER
| null |
# What does this PR do?
Some models copy the config in the `__init__`. We need to make sure that after we init the model we're still using the correct config.
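A minimal sketch of the failure mode being fixed (the class names and the attribute are illustrative, not the actual models involved):
```python
import copy

class ToyConfig:
    def __init__(self):
        self.attn_implementation = "eager"

class ToyModel:
    def __init__(self, config):
        # Some models deep-copy (or rebuild) the config in __init__, so later
        # mutations of the caller's config object never reach the model.
        self.config = copy.deepcopy(config)

config = ToyConfig()
model = ToyModel(config)
config.attn_implementation = "flash_attention_2"  # e.g. set by from_pretrained after init
print(model.config.attn_implementation)            # prints "eager": the silent-copy problem
```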
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/27043/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/27043/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27043",
"html_url": "https://github.com/huggingface/transformers/pull/27043",
"diff_url": "https://github.com/huggingface/transformers/pull/27043.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27043.patch",
"merged_at": 1698167138000
}
|