pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 198 values) | text (stringlengths, 1–900k) | metadata (stringlengths, 2–438k) | id (stringlengths, 5–122) | last_modified (null) | tags (listlengths, 1–1.84k) | sha (null) | created_at (stringlengths, 25–25) | arxiv (listlengths, 0–201) | languages (listlengths, 0–1.83k) | tags_str (stringlengths, 17–9.34k) | text_str (stringlengths, 0–389k) | text_lists (listlengths, 0–722) | processed_texts (listlengths, 1–723)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
null |
transformers
|
# MultiBERTs, Intermediate Checkpoint - Seed 4, Step 40k
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
hyper-parameters similar to those of
[the original BERT model](https://github.com/google-research/bert), but
with different random seeds, which cause variations in the initial weights and the order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).
This is model #4, captured at step 40k (max: 2000k, i.e., 2M steps).
## Model Description
This model was captured during a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences from the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).
This is a best-effort reproduction, so it is probable that differences from
the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is often comparable to that of the original
BERT, but we found significant differences on the SQuAD dev set (MultiBERTs outperforms the original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
### How to use
Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel

# Load the tokenizer and the TensorFlow weights for this checkpoint.
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_4-step_40k')
model = TFBertModel.from_pretrained("google/multiberts-seed_4-step_40k")

text = "Replace me with any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```python
from transformers import BertTokenizer, BertModel

# Load the tokenizer and the PyTorch weights for this checkpoint.
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_4-step_40k')
model = BertModel.from_pretrained("google/multiberts-seed_4-step_40k")

text = "Replace me with any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
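Because MultiBERTs is meant for comparing checkpoints rather than studying a single artifact, it is often useful to load several intermediate checkpoints in one loop. The sketch below is illustrative only: it assumes the `google/multiberts-seed_4-step_{step}` naming pattern used by this release, restricts itself to steps that are published for seed 4 (40k, 60k, 500k, 600k, 700k, 800k), and uses a simple cosine-similarity probe as a stand-in for whatever analysis you actually care about.
```python
import torch
from transformers import BertTokenizer, BertModel

# A few published seed-4 checkpoints (selection chosen for illustration only).
steps = ["40k", "60k", "500k", "600k", "700k", "800k"]
text = "MultiBERTs supports robustness analysis across seeds and training steps."

embeddings = {}
for step in steps:
    name = f"google/multiberts-seed_4-step_{step}"  # naming pattern used by this release
    tokenizer = BertTokenizer.from_pretrained(name)
    model = BertModel.from_pretrained(name)
    encoded = tokenizer(text, return_tensors="pt")
    with torch.no_grad():  # inference only
        out = model(**encoded)
    embeddings[step] = out.last_hidden_state.mean(dim=1)  # mean-pooled sentence vector

# How close is each intermediate representation to the step-800k one?
for step, emb in embeddings.items():
    sim = torch.nn.functional.cosine_similarity(emb, embeddings["800k"]).item()
    print(step, round(sim, 3))
```
Any per-checkpoint metric (e.g., a downstream fine-tuning score) can be dropped in place of the cosine probe.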
## Citation info
```bibtex
@article{sellam2021multiberts,
title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
journal={arXiv preprint arXiv:2106.16163},
year={2021}
}
```
|
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_4", "multiberts-seed_4-step_40k"]}
|
google/multiberts-seed_4-step_40k
| null |
[
"transformers",
"pytorch",
"tf",
"bert",
"pretraining",
"multiberts",
"multiberts-seed_4",
"multiberts-seed_4-step_40k",
"en",
"arxiv:2106.16163",
"arxiv:1908.08962",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2106.16163",
"1908.08962"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_4 #multiberts-seed_4-step_40k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
|
# MultiBERTs, Intermediate Checkpoint - Seed 4, Step 40k
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
the original BERT model but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
URL We describe them in our
paper
The MultiBERTs: BERT Reproductions for Robustness Analysis.
This is model #4, captured at step 40k (max: 2000k, i.e., 2M steps).
## Model Description
This model was captured during a reproduction of
BERT-base uncased, for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to BERT-base uncased. Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for Turc et al., 2019.
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our technical report for more details.
### How to use
Using code from
BERT-base uncased, here is an example based on
Tensorflow:
PyTorch version:
info
|
[
"# MultiBERTs, Intermediate Checkpoint - Seed 4, Step 40k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #4, captured at step 40k (max: 2000k, i.e., 2M steps).",
"## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.",
"### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo"
] |
[
"TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_4 #multiberts-seed_4-step_40k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs, Intermediate Checkpoint - Seed 4, Step 40k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #4, captured at step 40k (max: 2000k, i.e., 2M steps).",
"## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.",
"### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo"
] |
null |
transformers
|
# MultiBERTs, Intermediate Checkpoint - Seed 4, Step 500k
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
hyper-parameters similar to those of
[the original BERT model](https://github.com/google-research/bert), but
with different random seeds, which cause variations in the initial weights and the order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).
This is model #4, captured at step 500k (max: 2000k, i.e., 2M steps).
## Model Description
This model was captured during a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences from the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).
This is a best-effort reproduction, so it is probable that differences from
the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is often comparable to that of the original
BERT, but we found significant differences on the SQuAD dev set (MultiBERTs outperforms the original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
### How to use
Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel

# Load the tokenizer and the TensorFlow weights for this checkpoint.
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_4-step_500k')
model = TFBertModel.from_pretrained("google/multiberts-seed_4-step_500k")

text = "Replace me with any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```python
from transformers import BertTokenizer, BertModel

# Load the tokenizer and the PyTorch weights for this checkpoint.
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_4-step_500k')
model = BertModel.from_pretrained("google/multiberts-seed_4-step_500k")

text = "Replace me with any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
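The snippets above stop at the raw encoder output. As a minimal sketch of what to do with it (the attribute names below come from the standard `transformers` model outputs, not from anything specific to this checkpoint), here is how to pull out token-level and mean-pooled sentence representations:
```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_4-step_500k')
model = BertModel.from_pretrained('google/multiberts-seed_4-step_500k')

encoded_input = tokenizer("Replace me with any text you'd like.", return_tensors='pt')
with torch.no_grad():  # feature extraction only, no gradients needed
    output = model(**encoded_input)

token_embeddings = output.last_hidden_state        # (batch, sequence_length, 768)
sentence_embedding = token_embeddings.mean(dim=1)  # simple mean pooling over tokens
print(token_embeddings.shape, sentence_embedding.shape)
```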
## Citation info
```bibtex
@article{sellam2021multiberts,
title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
journal={arXiv preprint arXiv:2106.16163},
year={2021}
}
```
|
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_4", "multiberts-seed_4-step_500k"]}
|
google/multiberts-seed_4-step_500k
| null |
[
"transformers",
"pytorch",
"tf",
"bert",
"pretraining",
"multiberts",
"multiberts-seed_4",
"multiberts-seed_4-step_500k",
"en",
"arxiv:2106.16163",
"arxiv:1908.08962",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2106.16163",
"1908.08962"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_4 #multiberts-seed_4-step_500k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
|
# MultiBERTs, Intermediate Checkpoint - Seed 4, Step 500k
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
the original BERT model but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
URL We describe them in our
paper
The MultiBERTs: BERT Reproductions for Robustness Analysis.
This is model #4, captured at step 500k (max: 2000k, i.e., 2M steps).
## Model Description
This model was captured during a reproduction of
BERT-base uncased, for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to BERT-base uncased. Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for Turc et al., 2019.
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our technical report for more details.
### How to use
Using code from
BERT-base uncased, here is an example based on
Tensorflow:
PyTorch version:
info
|
[
"# MultiBERTs, Intermediate Checkpoint - Seed 4, Step 500k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #4, captured at step 500k (max: 2000k, i.e., 2M steps).",
"## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.",
"### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo"
] |
[
"TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_4 #multiberts-seed_4-step_500k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs, Intermediate Checkpoint - Seed 4, Step 500k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #4, captured at step 500k (max: 2000k, i.e., 2M steps).",
"## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.",
"### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo"
] |
null |
transformers
|
# MultiBERTs, Intermediate Checkpoint - Seed 4, Step 600k
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
hyper-parameters similar to those of
[the original BERT model](https://github.com/google-research/bert), but
with different random seeds, which cause variations in the initial weights and the order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).
This is model #4, captured at step 600k (max: 2000k, i.e., 2M steps).
## Model Description
This model was captured during a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences from the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).
This is a best-effort reproduction, so it is probable that differences from
the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is often comparable to that of the original
BERT, but we found significant differences on the SQuAD dev set (MultiBERTs outperforms the original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
### How to use
Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel

# Load the tokenizer and the TensorFlow weights for this checkpoint.
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_4-step_600k')
model = TFBertModel.from_pretrained("google/multiberts-seed_4-step_600k")

text = "Replace me with any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```python
from transformers import BertTokenizer, BertModel

# Load the tokenizer and the PyTorch weights for this checkpoint.
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_4-step_600k')
model = BertModel.from_pretrained("google/multiberts-seed_4-step_600k")

text = "Replace me with any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
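Because this checkpoint is exported as a `pretraining`-type BERT (see the tags), it should still carry its masked-language-modelling head, so a quick qualitative sanity check is to run it through the fill-mask pipeline. This is a sketch under that assumption; `transformers` will typically warn that the unused NSP head weights are dropped:
```python
from transformers import pipeline

# Builds a masked-LM model from the pretraining checkpoint; the NSP head is discarded.
unmasker = pipeline('fill-mask', model='google/multiberts-seed_4-step_600k')
for prediction in unmasker("The capital of France is [MASK]."):
    print(prediction['token_str'], round(prediction['score'], 3))
```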
## Citation info
```bibtex
@article{sellam2021multiberts,
title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
journal={arXiv preprint arXiv:2106.16163},
year={2021}
}
```
|
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_4", "multiberts-seed_4-step_600k"]}
|
google/multiberts-seed_4-step_600k
| null |
[
"transformers",
"pytorch",
"tf",
"bert",
"pretraining",
"multiberts",
"multiberts-seed_4",
"multiberts-seed_4-step_600k",
"en",
"arxiv:2106.16163",
"arxiv:1908.08962",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2106.16163",
"1908.08962"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_4 #multiberts-seed_4-step_600k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
|
# MultiBERTs, Intermediate Checkpoint - Seed 4, Step 600k
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
the original BERT model but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
URL We describe them in our
paper
The MultiBERTs: BERT Reproductions for Robustness Analysis.
This is model #4, captured at step 600k (max: 2000k, i.e., 2M steps).
## Model Description
This model was captured during a reproduction of
BERT-base uncased, for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to BERT-base uncased. Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for Turc et al., 2019.
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our technical report for more details.
### How to use
Using code from
BERT-base uncased, here is an example based on
Tensorflow:
PyTorch version:
info
|
[
"# MultiBERTs, Intermediate Checkpoint - Seed 4, Step 600k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #4, captured at step 600k (max: 2000k, i.e., 2M steps).",
"## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.",
"### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo"
] |
[
"TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_4 #multiberts-seed_4-step_600k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs, Intermediate Checkpoint - Seed 4, Step 600k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #4, captured at step 600k (max: 2000k, i.e., 2M steps).",
"## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.",
"### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo"
] |
null |
transformers
|
# MultiBERTs, Intermediate Checkpoint - Seed 4, Step 60k
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
hyper-parameters similar to those of
[the original BERT model](https://github.com/google-research/bert), but
with different random seeds, which cause variations in the initial weights and the order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).
This is model #4, captured at step 60k (max: 2000k, i.e., 2M steps).
## Model Description
This model was captured during a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences from the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).
This is a best-effort reproduction, so it is probable that differences from
the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is often comparable to that of the original
BERT, but we found significant differences on the SQuAD dev set (MultiBERTs outperforms the original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
### How to use
Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel

# Load the tokenizer and the TensorFlow weights for this checkpoint.
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_4-step_60k')
model = TFBertModel.from_pretrained("google/multiberts-seed_4-step_60k")

text = "Replace me with any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```python
from transformers import BertTokenizer, BertModel

# Load the tokenizer and the PyTorch weights for this checkpoint.
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_4-step_60k')
model = BertModel.from_pretrained("google/multiberts-seed_4-step_60k")

text = "Replace me with any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
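The examples above load only the bare encoder and therefore drop the pre-training heads. If you want the MLM and NSP logits the checkpoint was actually trained to produce, you can load the full pre-training architecture instead. This is a sketch, assuming the weights follow the `BertForPreTraining` layout implied by the repository's `pretraining` tag:
```python
import torch
from transformers import BertTokenizer, BertForPreTraining

tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_4-step_60k')
model = BertForPreTraining.from_pretrained('google/multiberts-seed_4-step_60k')

# A sentence pair, so the NSP head has something meaningful to score.
encoded = tokenizer("The cat sat on the mat.", "It looked comfortable.", return_tensors='pt')
with torch.no_grad():
    output = model(**encoded)

print(output.prediction_logits.shape)        # (batch, sequence_length, vocab_size): MLM head
print(output.seq_relationship_logits.shape)  # (batch, 2): NSP is-next / not-next scores
```
Note that at an early checkpoint such as 40k or 60k steps, both heads are still far from converged, so treat their outputs as diagnostics rather than predictions.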
## Citation info
```bibtex
@article{sellam2021multiberts,
title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
journal={arXiv preprint arXiv:2106.16163},
year={2021}
}
```
|
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_4", "multiberts-seed_4-step_60k"]}
|
google/multiberts-seed_4-step_60k
| null |
[
"transformers",
"pytorch",
"tf",
"bert",
"pretraining",
"multiberts",
"multiberts-seed_4",
"multiberts-seed_4-step_60k",
"en",
"arxiv:2106.16163",
"arxiv:1908.08962",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2106.16163",
"1908.08962"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_4 #multiberts-seed_4-step_60k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
|
# MultiBERTs, Intermediate Checkpoint - Seed 4, Step 60k
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
the original BERT model but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
URL We describe them in our
paper
The MultiBERTs: BERT Reproductions for Robustness Analysis.
This is model #4, captured at step 60k (max: 2000k, i.e., 2M steps).
## Model Description
This model was captured during a reproduction of
BERT-base uncased, for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to BERT-base uncased. Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for Turc et al., 2019.
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our technical report for more details.
### How to use
Using code from
BERT-base uncased, here is an example based on
Tensorflow:
PyTorch version:
info
|
[
"# MultiBERTs, Intermediate Checkpoint - Seed 4, Step 60k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #4, captured at step 60k (max: 2000k, i.e., 2M steps).",
"## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.",
"### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo"
] |
[
"TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_4 #multiberts-seed_4-step_60k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs, Intermediate Checkpoint - Seed 4, Step 60k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #4, captured at step 60k (max: 2000k, i.e., 2M steps).",
"## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.",
"### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo"
] |
null |
transformers
|
# MultiBERTs, Intermediate Checkpoint - Seed 4, Step 700k
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
hyper-parameters similar to those of
[the original BERT model](https://github.com/google-research/bert), but
with different random seeds, which cause variations in the initial weights and the order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).
This is model #4, captured at step 700k (max: 2000k, i.e., 2M steps).
## Model Description
This model was captured during a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences from the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).
This is a best-effort reproduction, so it is probable that differences from
the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is often comparable to that of the original
BERT, but we found significant differences on the SQuAD dev set (MultiBERTs outperforms the original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
### How to use
Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel

# Load the tokenizer and the TensorFlow weights for this checkpoint.
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_4-step_700k')
model = TFBertModel.from_pretrained("google/multiberts-seed_4-step_700k")

text = "Replace me with any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```python
from transformers import BertTokenizer, BertModel

# Load the tokenizer and the PyTorch weights for this checkpoint.
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_4-step_700k')
model = BertModel.from_pretrained("google/multiberts-seed_4-step_700k")

text = "Replace me with any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
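As above, the example code returns raw encoder outputs. A minimal sketch of turning them into usable features follows; it relies only on the standard `transformers` output attributes, which is an assumption of this example rather than anything specific to the checkpoint:
```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_4-step_700k')
model = BertModel.from_pretrained('google/multiberts-seed_4-step_700k')

encoded_input = tokenizer("Replace me with any text you'd like.", return_tensors='pt')
with torch.no_grad():  # feature extraction only, no gradients needed
    output = model(**encoded_input)

token_embeddings = output.last_hidden_state        # (batch, sequence_length, 768)
sentence_embedding = token_embeddings.mean(dim=1)  # simple mean pooling over tokens
print(token_embeddings.shape, sentence_embedding.shape)
```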
## Citation info
```bibtex
@article{sellam2021multiberts,
title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
journal={arXiv preprint arXiv:2106.16163},
year={2021}
}
```
|
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_4", "multiberts-seed_4-step_700k"]}
|
google/multiberts-seed_4-step_700k
| null |
[
"transformers",
"pytorch",
"tf",
"bert",
"pretraining",
"multiberts",
"multiberts-seed_4",
"multiberts-seed_4-step_700k",
"en",
"arxiv:2106.16163",
"arxiv:1908.08962",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2106.16163",
"1908.08962"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_4 #multiberts-seed_4-step_700k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
|
# MultiBERTs, Intermediate Checkpoint - Seed 4, Step 700k
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
the original BERT model but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
URL We describe them in our
paper
The MultiBERTs: BERT Reproductions for Robustness Analysis.
This is model #4, captured at step 700k (max: 2000k, i.e., 2M steps).
## Model Description
This model was captured during a reproduction of
BERT-base uncased, for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to BERT-base uncased. Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for Turc et al., 2019.
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our technical report for more details.
### How to use
Using code from
BERT-base uncased, here is an example based on
Tensorflow:
PyTorch version:
info
|
[
"# MultiBERTs, Intermediate Checkpoint - Seed 4, Step 700k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #4, captured at step 700k (max: 2000k, i.e., 2M steps).",
"## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.",
"### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo"
] |
[
"TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_4 #multiberts-seed_4-step_700k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs, Intermediate Checkpoint - Seed 4, Step 700k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #4, captured at step 700k (max: 2000k, i.e., 2M steps).",
"## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.",
"### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo"
] |
null |
transformers
|
# MultiBERTs, Intermediate Checkpoint - Seed 4, Step 800k
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
hyper-parameters similar to those of
[the original BERT model](https://github.com/google-research/bert), but
with different random seeds, which cause variations in the initial weights and the order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).
This is model #4, captured at step 800k (max: 2000k, i.e., 2M steps).
## Model Description
This model was captured during a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences from the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).
This is a best-effort reproduction, so it is probable that differences from
the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is often comparable to that of the original
BERT, but we found significant differences on the SQuAD dev set (MultiBERTs outperforms the original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
### How to use
Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
Tensorflow:
```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_4-step_800k')
model = TFBertModel.from_pretrained("google/multiberts-seed_4-step_800k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_4-step_800k')
model = BertModel.from_pretrained("google/multiberts-seed_4-step_800k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
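As a further, optional illustration (not part of the original card), this intermediate checkpoint can also be probed through its masked-language-modelling head, assuming the released weights include the pre-training heads alongside the encoder. The snippet below is a minimal sketch under that assumption.

```
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_4-step_800k')
# Assumes the checkpoint ships with the MLM head used during pre-training.
mlm_model = BertForMaskedLM.from_pretrained('google/multiberts-seed_4-step_800k')

text = "The capital of France is [MASK]."
inputs = tokenizer(text, return_tensors='pt')
with torch.no_grad():
    logits = mlm_model(**inputs).logits

# Top-5 predictions for the masked position.
mask_pos = (inputs['input_ids'] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_ids = logits[0, mask_pos].topk(5).indices[0].tolist()
print(tokenizer.convert_ids_to_tokens(top_ids))
```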
## Citation info
```bibtex
@article{sellam2021multiberts,
title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
journal={arXiv preprint arXiv:2106.16163},
year={2021}
}
```
|
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_4", "multiberts-seed_4-step_800k"]}
|
google/multiberts-seed_4-step_800k
| null |
[
"transformers",
"pytorch",
"tf",
"bert",
"pretraining",
"multiberts",
"multiberts-seed_4",
"multiberts-seed_4-step_800k",
"en",
"arxiv:2106.16163",
"arxiv:1908.08962",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2106.16163",
"1908.08962"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_4 #multiberts-seed_4-step_800k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
|
# MultiBERTs, Intermediate Checkpoint - Seed 4, Step 800k
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
the original BERT model but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
URL We describe them in our
paper
The MultiBERTs: BERT Reproductions for Robustness Analysis.
This is model #4, captured at step 800k (max: 2000k, i.e., 2M steps).
## Model Description
This model was captured during a reproduction of
BERT-base uncased, for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to BERT-base uncased. Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for Turc et al., 2019.
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our technical report for more details.
### How to use
Using code from
BERT-base uncased, here is an example based on
Tensorflow:
PyTorch version:
info
|
[
"# MultiBERTs, Intermediate Checkpoint - Seed 4, Step 800k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #4, captured at step 800k (max: 2000k, i.e., 2M steps).",
"## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.",
"### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo"
] |
[
"TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_4 #multiberts-seed_4-step_800k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs, Intermediate Checkpoint - Seed 4, Step 800k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #4, captured at step 800k (max: 2000k, i.e., 2M steps).",
"## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.",
"### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo"
] |
null |
transformers
|
# MultiBERTs, Intermediate Checkpoint - Seed 4, Step 80k
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
[the original BERT model](https://github.com/google-research/bert) but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).
This is model #4, captured at step 80k (max: 2000k, i.e., 2M steps).
## Model Description
This model was captured during a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
### How to use
Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
Tensorflow:
```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_4-step_80k')
model = TFBertModel.from_pretrained("google/multiberts-seed_4-step_80k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_4-step_80k')
model = BertModel.from_pretrained("google/multiberts-seed_4-step_80k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
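Because the collection exposes checkpoints at many training steps, a natural (purely illustrative) use is to track how a representation evolves during pre-training. The sketch below loops over a few step suffixes named in this collection; the step list and the choice of the [CLS] vector are assumptions for illustration only, not part of the original card.

```
import torch
from transformers import BertTokenizer, BertModel

text = "MultiBERTs checkpoints expose pre-training dynamics."
for step in ["80k", "800k", "900k"]:  # illustrative subset of the released steps
    name = f"google/multiberts-seed_4-step_{step}"
    tokenizer = BertTokenizer.from_pretrained(name)
    model = BertModel.from_pretrained(name)
    with torch.no_grad():
        outputs = model(**tokenizer(text, return_tensors='pt'))
    cls_vector = outputs.last_hidden_state[0, 0]  # embedding of the [CLS] token
    print(name, float(cls_vector.norm()))
```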
## Citation info
```bibtex
@article{sellam2021multiberts,
title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
journal={arXiv preprint arXiv:2106.16163},
year={2021}
}
```
|
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_4", "multiberts-seed_4-step_80k"]}
|
google/multiberts-seed_4-step_80k
| null |
[
"transformers",
"pytorch",
"tf",
"bert",
"pretraining",
"multiberts",
"multiberts-seed_4",
"multiberts-seed_4-step_80k",
"en",
"arxiv:2106.16163",
"arxiv:1908.08962",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2106.16163",
"1908.08962"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_4 #multiberts-seed_4-step_80k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
|
# MultiBERTs, Intermediate Checkpoint - Seed 4, Step 80k
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
the original BERT model but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
URL We describe them in our
paper
The MultiBERTs: BERT Reproductions for Robustness Analysis.
This is model #4, captured at step 80k (max: 2000k, i.e., 2M steps).
## Model Description
This model was captured during a reproduction of
BERT-base uncased, for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to BERT-base uncased. Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for Turc et al., 2019.
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our technical report for more details.
### How to use
Using code from
BERT-base uncased, here is an example based on
Tensorflow:
PyTorch version:
info
|
[
"# MultiBERTs, Intermediate Checkpoint - Seed 4, Step 80k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #4, captured at step 80k (max: 2000k, i.e., 2M steps).",
"## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.",
"### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo"
] |
[
"TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_4 #multiberts-seed_4-step_80k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs, Intermediate Checkpoint - Seed 4, Step 80k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #4, captured at step 80k (max: 2000k, i.e., 2M steps).",
"## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.",
"### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo"
] |
null |
transformers
|
# MultiBERTs, Intermediate Checkpoint - Seed 4, Step 900k
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
[the original BERT model](https://github.com/google-research/bert) but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).
This is model #4, captured at step 900k (max: 2000k, i.e., 2M steps).
## Model Description
This model was captured during a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
### How to use
Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
Tensorflow:
```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_4-step_900k')
model = TFBertModel.from_pretrained("google/multiberts-seed_4-step_900k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_4-step_900k')
model = BertModel.from_pretrained("google/multiberts-seed_4-step_900k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
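The `output` object returned above contains token-level hidden states (plus a pooled [CLS] vector); if a single sentence vector is needed, one common heuristic is attention-mask-weighted mean pooling. The following sketch reuses the `model` and `encoded_input` variables from the PyTorch snippet above and is an illustration rather than part of the original card.

```
import torch

with torch.no_grad():
    output = model(**encoded_input)

token_states = output.last_hidden_state                # (batch, seq_len, 768)
mask = encoded_input['attention_mask'].unsqueeze(-1)   # (batch, seq_len, 1)
sentence_embedding = (token_states * mask).sum(dim=1) / mask.sum(dim=1)
print(sentence_embedding.shape)                        # torch.Size([1, 768])
```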
## Citation info
```bibtex
@article{sellam2021multiberts,
title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
journal={arXiv preprint arXiv:2106.16163},
year={2021}
}
```
|
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_4", "multiberts-seed_4-step_900k"]}
|
google/multiberts-seed_4-step_900k
| null |
[
"transformers",
"pytorch",
"tf",
"bert",
"pretraining",
"multiberts",
"multiberts-seed_4",
"multiberts-seed_4-step_900k",
"en",
"arxiv:2106.16163",
"arxiv:1908.08962",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2106.16163",
"1908.08962"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_4 #multiberts-seed_4-step_900k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
|
# MultiBERTs, Intermediate Checkpoint - Seed 4, Step 900k
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
the original BERT model but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
URL We describe them in our
paper
The MultiBERTs: BERT Reproductions for Robustness Analysis.
This is model #4, captured at step 900k (max: 2000k, i.e., 2M steps).
## Model Description
This model was captured during a reproduction of
BERT-base uncased, for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to BERT-base uncased. Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for Turc et al., 2019.
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our technical report for more details.
### How to use
Using code from
BERT-base uncased, here is an example based on
Tensorflow:
PyTorch version:
info
|
[
"# MultiBERTs, Intermediate Checkpoint - Seed 4, Step 900k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #4, captured at step 900k (max: 2000k, i.e., 2M steps).",
"## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.",
"### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo"
] |
[
"TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_4 #multiberts-seed_4-step_900k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs, Intermediate Checkpoint - Seed 4, Step 900k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #4, captured at step 900k (max: 2000k, i.e., 2M steps).",
"## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.",
"### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo"
] |
null |
transformers
|
# MultiBERTs - Seed 4
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
[the original BERT model](https://github.com/google-research/bert) but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).
This is model #4.
## Model Description
This model is a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
### How to use
Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
Tensorflow:
```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_4')
model = TFBertModel.from_pretrained("google/multiberts-seed_4")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_4')
model = BertModel.from_pretrained("google/multiberts-seed_4")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
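Since the point of MultiBERTs is to separate seed-specific behaviour from properties of the training procedure, one illustrative pattern (not from the original card) is to run the same input through several seeds and compare the resulting representations; the seed subset below is arbitrary.

```
import torch
from transformers import BertTokenizer, BertModel

text = "Findings should be checked across several pre-training seeds."
for seed in (4, 5, 6):  # any of the 25 released seeds can be used
    name = f"google/multiberts-seed_{seed}"
    tokenizer = BertTokenizer.from_pretrained(name)
    model = BertModel.from_pretrained(name)
    with torch.no_grad():
        hidden = model(**tokenizer(text, return_tensors='pt')).last_hidden_state
    print(name, float(hidden.mean()))
```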
## Citation info
```bibtex
@article{sellam2021multiberts,
title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
journal={arXiv preprint arXiv:2106.16163},
year={2021}
}
```
|
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_4"]}
|
google/multiberts-seed_4
| null |
[
"transformers",
"pytorch",
"tf",
"bert",
"pretraining",
"multiberts",
"multiberts-seed_4",
"en",
"arxiv:2106.16163",
"arxiv:1908.08962",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2106.16163",
"1908.08962"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_4 #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
|
# MultiBERTs - Seed 4
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
the original BERT model but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
URL We describe them in our
paper
The MultiBERTs: BERT Reproductions for Robustness Analysis.
This is model #4.
## Model Description
This model is a reproduction of
BERT-base uncased, for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure are similar
to BERT-base uncased. Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for Turc et al., 2019.
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our technical report for more details.
### How to use
Using code from
BERT-base uncased, here is an example based on
Tensorflow:
PyTorch version:
info
|
[
"# MultiBERTs - Seed 4\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #4.",
"## Model Description\n\nThis model is a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.",
"### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo"
] |
[
"TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_4 #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs - Seed 4\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #4.",
"## Model Description\n\nThis model is a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.",
"### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo"
] |
null |
transformers
|
# MultiBERTs - Seed 5
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
[the original BERT model](https://github.com/google-research/bert) but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).
This is model #5.
## Model Description
This model is a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
### How to use
Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
Tensorflow:
```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_5')
model = TFBertModel.from_pretrained("google/multiberts-seed_5")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_5')
model = BertModel.from_pretrained("google/multiberts-seed_5")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
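For downstream evaluation such as the GLUE experiments mentioned above, the checkpoint can serve as a drop-in BERT encoder in the standard fine-tuning recipe. The sketch below (an assumption-laden illustration, not part of the original card) only shows the model setup; the example inputs are placeholders, and the actual task data, labels, and training loop are left out.

```
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_5')
# The classification head on top of the pre-trained encoder is randomly
# initialised and must be fine-tuned on task data before use.
classifier = BertForSequenceClassification.from_pretrained(
    'google/multiberts-seed_5', num_labels=2
)

batch = tokenizer(["a great movie", "a dull movie"], padding=True, return_tensors='pt')
logits = classifier(**batch).logits
print(logits.shape)  # torch.Size([2, 2])
```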
## Citation info
```bibtex
@article{sellam2021multiberts,
title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
journal={arXiv preprint arXiv:2106.16163},
year={2021}
}
```
|
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_5"]}
|
google/multiberts-seed_5
| null |
[
"transformers",
"pytorch",
"tf",
"bert",
"pretraining",
"multiberts",
"multiberts-seed_5",
"en",
"arxiv:2106.16163",
"arxiv:1908.08962",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2106.16163",
"1908.08962"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_5 #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
|
# MultiBERTs - Seed 5
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
the original BERT model but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
URL We describe them in our
paper
The MultiBERTs: BERT Reproductions for Robustness Analysis.
This is model #5.
## Model Description
This model is a reproduction of
BERT-base uncased, for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure are similar
to BERT-base uncased. Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for Turc et al., 2019.
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our technical report for more details.
### How to use
Using code from
BERT-base uncased, here is an example based on
Tensorflow:
PyTorch version:
info
|
[
"# MultiBERTs - Seed 5\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #5.",
"## Model Description\n\nThis model is a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.",
"### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo"
] |
[
"TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_5 #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs - Seed 5\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #5.",
"## Model Description\n\nThis model is a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.",
"### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo"
] |
null |
transformers
|
# MultiBERTs - Seed 6
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
[the original BERT model](https://github.com/google-research/bert) but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).
This is model #6.
## Model Description
This model is a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
### How to use
Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
Tensorflow:
```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_6')
model = TFBertModel.from_pretrained("google/multiberts-seed_6")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_6')
model = BertModel.from_pretrained("google/multiberts-seed_6")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
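The card above also lists Next Sentence Prediction among the pre-training objectives; assuming the released weights include the NSP head, it can be probed as sketched below (an illustration under that assumption, not part of the original card).

```
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_6')
nsp_model = BertForNextSentencePrediction.from_pretrained('google/multiberts-seed_6')

first = "The storm knocked out the power for hours."
second = "Candles were the only light we had that night."
inputs = tokenizer(first, second, return_tensors='pt')
with torch.no_grad():
    logits = nsp_model(**inputs).logits  # index 0 = "B follows A", index 1 = "B is random"
print(torch.softmax(logits, dim=-1))
```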
## Citation info
```bibtex
@article{sellam2021multiberts,
title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
journal={arXiv preprint arXiv:2106.16163},
year={2021}
}
```
|
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_6"]}
|
google/multiberts-seed_6
| null |
[
"transformers",
"pytorch",
"tf",
"bert",
"pretraining",
"multiberts",
"multiberts-seed_6",
"en",
"arxiv:2106.16163",
"arxiv:1908.08962",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2106.16163",
"1908.08962"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_6 #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
|
# MultiBERTs - Seed 6
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
the original BERT model but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
URL We describe them in our
paper
The MultiBERTs: BERT Reproductions for Robustness Analysis.
This is model #6.
## Model Description
This model is a reproduction of
BERT-base uncased, for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure are similar
to BERT-base uncased. Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for Turc et al., 2019.
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our technical report for more details.
### How to use
Using code from
BERT-base uncased, here is an example based on
Tensorflow:
PyTorch version:
info
|
[
"# MultiBERTs - Seed 6\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #6.",
"## Model Description\n\nThis model is a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.",
"### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo"
] |
[
"TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_6 #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs - Seed 6\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #6.",
"## Model Description\n\nThis model is a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.",
"### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo"
] |
null |
transformers
|
# MultiBERTs - Seed 7
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
[the original BERT model](https://github.com/google-research/bert) but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).
This is model #7.
## Model Description
This model is a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
### How to use
Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
Tensorflow:
```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_7')
model = TFBertModel.from_pretrained("google/multiberts-seed_7")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_7')
model = BertModel.from_pretrained("google/multiberts-seed_7")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
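The intermediate checkpoints mentioned above (released for the first five seeds only) can be loaded the same way. The snippet below is a sketch; the `-step_<N>k` repository suffix is an assumed naming pattern, so check the release page for the exact checkpoint names:
```
from transformers import BertTokenizer, BertModel

# Sketch: load an intermediate pre-training checkpoint rather than the final model.
# The "-step_20k" suffix is an assumed naming pattern; intermediate checkpoints are
# only released for the first five seeds (0-4), so seed_0 is used here.
checkpoint = "google/multiberts-seed_0-step_20k"
tokenizer = BertTokenizer.from_pretrained(checkpoint)
model = BertModel.from_pretrained(checkpoint)
```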
## Citation info
```bibtex
@article{sellam2021multiberts,
title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
journal={arXiv preprint arXiv:2106.16163},
year={2021}
}
```
|
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_7"]}
|
google/multiberts-seed_7
| null |
[
"transformers",
"pytorch",
"tf",
"bert",
"pretraining",
"multiberts",
"multiberts-seed_7",
"en",
"arxiv:2106.16163",
"arxiv:1908.08962",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2106.16163",
"1908.08962"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_7 #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
|
# MultiBERTs - Seed 7
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
the original BERT model but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
URL We describe them in our
paper
The MultiBERTs: BERT Reproductions for Robustness Analysis.
This is model #7.
## Model Description
This model is a reproduction of
BERT-base uncased, for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure are similar
to BERT-base uncased. Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for Turc et al., 2019.
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our technical report for more details.
### How to use
Using code from
BERT-base uncased, here is an example based on
Tensorflow:
PyTorch version:
info
|
[
"# MultiBERTs - Seed 7\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #7.",
"## Model Description\n\nThis model is a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.",
"### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo"
] |
[
"TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_7 #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs - Seed 7\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #7.",
"## Model Description\n\nThis model is a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.",
"### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo"
] |
null |
transformers
|
# MultiBERTs - Seed 8
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
[the original BERT model](https://github.com/google-research/bert) but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).
This is model #8.
## Model Description
This model is a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
### How to use
Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
Tensorflow:
```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_8')
model = TFBertModel.from_pretrained("google/multiberts-seed_8")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_8')
model = BertModel.from_pretrained("google/multiberts-seed_8")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Citation info
```bibtex
@article{sellam2021multiberts,
title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
journal={arXiv preprint arXiv:2106.16163},
year={2021}
}
```
|
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_8"]}
|
google/multiberts-seed_8
| null |
[
"transformers",
"pytorch",
"tf",
"bert",
"pretraining",
"multiberts",
"multiberts-seed_8",
"en",
"arxiv:2106.16163",
"arxiv:1908.08962",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2106.16163",
"1908.08962"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_8 #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
|
# MultiBERTs - Seed 8
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
the original BERT model but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
URL We describe them in our
paper
The MultiBERTs: BERT Reproductions for Robustness Analysis.
This is model #8.
## Model Description
This model is a reproduction of
BERT-base uncased, for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure are similar
to BERT-base uncased. Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for Turc et al., 2019.
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our technical report for more details.
### How to use
Using code from
BERT-base uncased, here is an example based on
Tensorflow:
PyTorch version:
info
|
[
"# MultiBERTs - Seed 8\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #8.",
"## Model Description\n\nThis model is a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.",
"### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo"
] |
[
"TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_8 #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs - Seed 8\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #8.",
"## Model Description\n\nThis model is a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.",
"### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo"
] |
null |
transformers
|
# MultiBERTs - Seed 9
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
[the original BERT model](https://github.com/google-research/bert) but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).
This is model #9.
## Model Description
This model is a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
### How to use
Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
Tensorflow:
```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_9')
model = TFBertModel.from_pretrained("google/multiberts-seed_9")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_9')
model = BertModel.from_pretrained("google/multiberts-seed_9")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Citation info
```bibtex
@article{sellam2021multiberts,
title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
journal={arXiv preprint arXiv:2106.16163},
year={2021}
}
```
|
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_9"]}
|
google/multiberts-seed_9
| null |
[
"transformers",
"pytorch",
"tf",
"bert",
"pretraining",
"multiberts",
"multiberts-seed_9",
"en",
"arxiv:2106.16163",
"arxiv:1908.08962",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2106.16163",
"1908.08962"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_9 #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
|
# MultiBERTs - Seed 9
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
the original BERT model but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
URL We describe them in our
paper
The MultiBERTs: BERT Reproductions for Robustness Analysis.
This is model #9.
## Model Description
This model is a reproduction of
BERT-base uncased, for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure are similar
to BERT-base uncased. Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for Turc et al., 2019.
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our technical report for more details.
### How to use
Using code from
BERT-base uncased, here is an example based on
Tensorflow:
PyTorch version:
info
|
[
"# MultiBERTs - Seed 9\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #9.",
"## Model Description\n\nThis model is a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.",
"### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo"
] |
[
"TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_9 #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs - Seed 9\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #9.",
"## Model Description\n\nThis model is a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.",
"### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo"
] |
fill-mask
|
transformers
|
MuRIL: Multilingual Representations for Indian Languages
===
MuRIL is a BERT model pre-trained on 17 Indian languages and their transliterated counterparts. We have released the pre-trained model (with the MLM layer intact, enabling masked word predictions) in this repository. We have also released the encoder on [TFHub](https://tfhub.dev/google/MuRIL/1) with an additional pre-processing module that processes raw text into the expected input format for the encoder. You can find more details on MuRIL in this [paper](http://arxiv.org/abs/2103.10730).
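Because the MLM layer is intact, the checkpoint can be exercised directly for masked word prediction. The snippet below is a minimal sketch using the `transformers` fill-mask pipeline and the hub id `google/muril-base-cased` under which this card is published:
```
from transformers import pipeline

# Minimal sketch: masked word prediction with the MLM head that ships with this checkpoint.
fill_mask = pipeline("fill-mask", model="google/muril-base-cased")

# A Hindi example ("Hello, how are you?") with the verb masked.
# MuRIL's tokenizer uses the standard BERT [MASK] token.
predictions = fill_mask("नमस्ते, आप कैसे [MASK]?")
for p in predictions:
    print(p["token_str"], round(p["score"], 4))
```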
## Overview
This model uses a BERT base architecture [1] pretrained from scratch using the
Wikipedia [2], Common Crawl [3], PMINDIA [4] and Dakshina [5] corpora for 17 [6]
Indian languages.
We use a training paradigm similar to multilingual BERT, with a few
modifications as listed:
* We include translation and transliteration segment pairs in training as
well.
* We keep an exponent value of 0.3 rather than 0.7 for upsampling, which has been shown to enhance low-resource performance. [7]
See the Training section for more details.
## Training
The MuRIL model is pre-trained on monolingual segments as well as parallel
segments as detailed below :
* Monolingual Data : We make use of publicly available corpora from Wikipedia
and Common Crawl for 17 Indian languages.
* Parallel Data : We have two types of parallel data :
* Translated Data : We obtain translations of the above monolingual
corpora using the Google NMT pipeline. We feed translated segment pairs
as input. We also make use of the publicly available PMINDIA corpus.
* Transliterated Data : We obtain transliterations of Wikipedia using the
IndicTrans [8] library. We feed transliterated segment pairs as input.
We also make use of the publicly available Dakshina dataset.
We keep an exponent value of 0.3 to calculate duplication multiplier values for
upsampling of lower-resourced languages and set dupe factors accordingly. Note
that we limit transliterated pairs to Wikipedia only.
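For intuition, the exponent-based smoothing can be sketched as follows; this follows the sampling scheme described in [7], and the corpus sizes are illustrative placeholders rather than the actual MuRIL statistics:
```
# Sketch of exponent-smoothed sampling (alpha = 0.3), following the scheme of [7].
# Corpus token counts below are illustrative placeholders, not MuRIL's real statistics.
corpus_tokens = {"en": 2_000_000_000, "hi": 300_000_000, "as": 5_000_000}

alpha = 0.3
total = sum(corpus_tokens.values())
natural = {lang: n / total for lang, n in corpus_tokens.items()}   # raw proportions
smoothed = {lang: p ** alpha for lang, p in natural.items()}
z = sum(smoothed.values())
sampling = {lang: p / z for lang, p in smoothed.items()}           # smoothed proportions

# A dupe (duplication) factor boosts lower-resourced languages so that their
# effective share of the training data approaches the smoothed proportion.
dupe_factor = {lang: sampling[lang] / natural[lang] for lang in corpus_tokens}
print({lang: round(v, 4) for lang, v in sampling.items()})
print({lang: round(v, 2) for lang, v in dupe_factor.items()})
```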
The model was trained using a self-supervised masked language modeling task. We
do whole word masking with a maximum of 80 predictions. The model was trained
for 1000K steps, with a batch size of 4096, and a max sequence length of 512.
### Trainable parameters
All parameters in the module are trainable, and fine-tuning all parameters is
the recommended practice.
## Uses & Limitations
This model is intended to be used for a variety of downstream NLP tasks for
Indian languages. This model is trained on transliterated data as well, a
phenomenon commonly observed in the Indian context. This model is not expected
to perform well on languages other than the ones used in pretraining, i.e. 17
Indian languages.
## Evaluation
We provide the results of fine-tuning this model on a set of downstream tasks.<br/>
We choose these tasks from the XTREME benchmark, with evaluation done on Indian language test-sets.<br/>
We also transliterate the test-sets and evaluate on the same.<br/>
We use the same fine-tuning setting as is used by [9], except for TyDiQA, where we use additional SQuAD v1.1 English training data, similar to [10].<br/>
For Tatoeba, we do not fine-tune the model, and use the pooled_output of the last layer as the sentence embedding.<br/>
All results are computed in a zero-shot setting, with English being the high resource training set language.
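As a concrete illustration of the Tatoeba setup above, the following is a minimal PyTorch sketch that takes the pooled output of the last layer as the sentence embedding (hub id `google/muril-base-cased` assumed):
```
import torch
from transformers import AutoTokenizer, AutoModel

# Minimal sketch: pooled output of the last layer as a sentence embedding (no fine-tuning).
tokenizer = AutoTokenizer.from_pretrained("google/muril-base-cased")
model = AutoModel.from_pretrained("google/muril-base-cased")

sentences = ["This is a sentence.", "यह एक वाक्य है।"]
inputs = tokenizer(sentences, padding=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

embeddings = outputs.pooler_output  # shape: (batch_size, hidden_size)
print(embeddings.shape)
```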
* Shown below are results on datasets from the XTREME benchmark (in %)
<br/>
PANX (F1) | ml | ta | te | en | bn | hi | mr | ur | Average
:-------- | ----: | ----: | ----: | ----: | ----: | ----: | ----: | ----: | ------:
mBERT | 54.77 | 51.24 | 50.16 | 84.40 | 68.59 | 65.13 | 58.44 | 31.36 | 58.01
MuRIL | 75.74 | 71.86 | 64.99 | 84.43 | 85.97 | 78.09 | 74.63 | 85.07 | 77.60
<br/>
UDPOS (F1) | en | hi | mr | ta | te | ur | Average
:--------- | ----: | ----: | ----: | ----: | ----: | ----: | ------:
mBERT | 95.35 | 66.09 | 71.27 | 59.58 | 76.98 | 57.85 | 71.19
MuRIL | 95.55 | 64.47 | 82.95 | 62.57 | 85.63 | 58.93 | 75.02
<br/>
XNLI (Accuracy) | en | hi | ur | Average
:-------------- | ----: | ----: | ----: | ------:
mBERT | 81.72 | 60.52 | 58.20 | 66.81
MuRIL | 83.85 | 70.66 | 67.70 | 74.07
<br/>
Tatoeba (Accuracy) | ml | ta | te | bn | hi | mr | ur | Average
:----------------- | ----: | ----: | ----: | ----: | ----: | ----: | ----: | ------:
mBERT | 20.23 | 12.38 | 14.96 | 12.80 | 27.80 | 18.00 | 22.70 | 18.41
MuRIL | 26.35 | 36.81 | 17.52 | 20.20 | 31.50 | 26.60 | 17.10 | 25.15
<br/>
XQUAD (F1/EM) | en | hi | Average
:------------ | ----------: | ----------: | ----------:
mBERT | 83.85/72.86 | 58.46/43.53 | 71.15/58.19
MuRIL | 84.31/72.94 | 73.93/58.32 | 79.12/65.63
<br/>
MLQA (F1/EM) | en | hi | Average
:----------- | ----------: | ----------: | ----------:
mBERT | 80.39/67.30 | 50.28/35.18 | 65.34/51.24
MuRIL | 80.28/67.37 | 67.34/50.22 | 73.81/58.80
<br/>
TyDiQA (F1/EM) | en | bn | te | Average
:---------------- | ----------: | ----------: | ----------: | ----------:
mBERT | 75.21/65.00 | 60.62/45.13 | 53.55/44.54 | 63.13/51.66
MuRIL | 74.10/64.55 | 78.03/66.37 | 73.95/46.94 | 75.36/59.28
* Shown below are results on the transliterated versions of the above
test-sets.
PANX (F1) | ml_tr | ta_tr | te_tr | bn_tr | hi_tr | mr_tr | ur_tr | Average
:-------- | ----: | ----: | ----: | ----: | ----: | ----: | ----: | ------:
mBERT | 7.53 | 1.04 | 8.24 | 41.77 | 25.46 | 8.34 | 7.30 | 14.24
MuRIL | 63.39 | 7.00 | 53.62 | 72.94 | 69.75 | 68.77 | 68.41 | 57.70
<br/>
UDPOS (F1) | hi_tr | mr_tr | ta_tr | te_tr | ur_tr | Average
:--------- | ----: | ----: | ----: | ----: | ----: | ------:
mBERT | 25.00 | 33.67 | 24.02 | 36.21 | 22.07 | 28.20
MuRIL | 63.09 | 67.19 | 58.40 | 65.30 | 56.49 | 62.09
<br/>
XNLI (Accuracy) | hi_tr | ur_tr | Average
:-------------- | ----: | ----: | ------:
mBERT | 39.6 | 38.86 | 39.23
MuRIL | 68.24 | 61.16 | 64.70
<br/>
Tatoeba (Accuracy) | ml_tr | ta_tr | te_tr | bn_tr | hi_tr | mr_tr | ur_tr | Average
:----------------- | ----: | ----: | ----: | ----: | ----: | ----: | ----: | ------:
mBERT | 2.18 | 1.95 | 5.13 | 1.80 | 3.00 | 2.40 | 2.30 | 2.68
MuRIL | 10.33 | 11.07 | 11.54 | 8.10 | 14.90 | 7.20 | 13.70 | 10.98
<br/>
## References
\[1]: Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova. [BERT:
Pre-training of Deep Bidirectional Transformers for Language
Understanding](https://arxiv.org/abs/1810.04805). arXiv preprint
arXiv:1810.04805, 2018.
\[2]: [Wikipedia](https://www.tensorflow.org/datasets/catalog/wikipedia)
\[3]: [Common Crawl](http://commoncrawl.org/the-data/)
\[4]:
[PMINDIA](http://lotus.kuee.kyoto-u.ac.jp/WAT/indic-multilingual/index.html)
\[5]: [Dakshina](https://github.com/google-research-datasets/dakshina)
\[6]: Assamese (as), Bengali (bn), English (en), Gujarati (gu), Hindi (hi),
Kannada (kn), Kashmiri (ks), Malayalam (ml), Marathi (mr), Nepali (ne), Oriya
(or), Punjabi (pa), Sanskrit (sa), Sindhi (sd), Tamil (ta), Telugu (te) and Urdu
(ur).
\[7]: Conneau, Alexis, et al.
[Unsupervised cross-lingual representation learning at scale](https://arxiv.org/pdf/1911.02116.pdf).
arXiv preprint arXiv:1911.02116 (2019).
\[8]: [IndicTrans](https://github.com/libindic/indic-trans)
\[9]: Hu, J., Ruder, S., Siddhant, A., Neubig, G., Firat, O., & Johnson, M.
(2020). [Xtreme: A massively multilingual multi-task benchmark for evaluating
cross-lingual generalization.](https://arxiv.org/pdf/2003.11080.pdf) arXiv
preprint arXiv:2003.11080.
\[10]: Fang, Y., Wang, S., Gan, Z., Sun, S., & Liu, J. (2020).
[FILTER: An Enhanced Fusion Method for Cross-lingual Language Understanding.](https://arxiv.org/pdf/2009.05166.pdf)
arXiv preprint arXiv:2009.05166.
## Citation
If you find MuRIL useful in your applications, please cite the following paper:
```
@misc{khanuja2021muril,
title={MuRIL: Multilingual Representations for Indian Languages},
author={Simran Khanuja and Diksha Bansal and Sarvesh Mehtani and Savya Khosla and Atreyee Dey and Balaji Gopalan and Dilip Kumar Margam and Pooja Aggarwal and Rajiv Teja Nagipogu and Shachi Dave and Shruti Gupta and Subhash Chandra Bose Gali and Vish Subramanian and Partha Talukdar},
year={2021},
eprint={2103.10730},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Contact
Please mail your queries/feedback to muril-contact@google.com.
|
{"license": "apache-2.0", "thumbnail": "https://huggingface.co/front/thumbnails/google.png"}
|
google/muril-base-cased
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"arxiv:2103.10730",
"arxiv:1810.04805",
"arxiv:1911.02116",
"arxiv:2003.11080",
"arxiv:2009.05166",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2103.10730",
"1810.04805",
"1911.02116",
"2003.11080",
"2009.05166"
] |
[] |
TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #arxiv-2103.10730 #arxiv-1810.04805 #arxiv-1911.02116 #arxiv-2003.11080 #arxiv-2009.05166 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
MuRIL: Multilingual Representations for Indian Languages
========================================================
MuRIL is a BERT model pre-trained on 17 Indian languages and their transliterated counterparts. We have released the pre-trained model (with the MLM layer intact, enabling masked word predictions) in this repository. We have also released the encoder on TFHub with an additional pre-processing module, that processes raw text into the expected input format for the encoder. You can find more details on MuRIL in this paper.
Overview
--------
This model uses a BERT base architecture [1] pretrained from scratch using the
Wikipedia [2], Common Crawl [3], PMINDIA [4] and Dakshina [5] corpora for 17 [6]
Indian languages.
We use a training paradigm similar to multilingual bert, with a few
modifications as listed:
* We include translation and transliteration segment pairs in training as
well.
* We keep an exponent value of 0.3 and not 0.7 for upsampling, shown to
enhance low-resource performance. [7]
See the Training section for more details.
Training
--------
The MuRIL model is pre-trained on monolingual segments as well as parallel
segments as detailed below :
* Monolingual Data : We make use of publicly available corpora from Wikipedia
and Common Crawl for 17 Indian languages.
* Parallel Data : We have two types of parallel data :
+ Translated Data : We obtain translations of the above monolingual
corpora using the Google NMT pipeline. We feed translated segment pairs
as input. We also make use of the publicly available PMINDIA corpus.
+ Transliterated Data : We obtain transliterations of Wikipedia using the
IndicTrans [8] library. We feed transliterated segment pairs as input.
We also make use of the publicly available Dakshina dataset.
We keep an exponent value of 0.3 to calculate duplication multiplier values for
upsampling of lower resourced languages and set dupe factors accordingly. Note,
we limit transliterated pairs to Wikipedia only.
The model was trained using a self-supervised masked language modeling task. We
do whole word masking with a maximum of 80 predictions. The model was trained
for 1000K steps, with a batch size of 4096, and a max sequence length of 512.
### Trainable parameters
All parameters in the module are trainable, and fine-tuning all parameters is
the recommended practice.
Uses & Limitations
------------------
This model is intended to be used for a variety of downstream NLP tasks for
Indian languages. This model is trained on transliterated data as well, a
phenomenon commonly observed in the Indian context. This model is not expected
to perform well on languages other than the ones used in pretraining, i.e. 17
Indian languages.
Evaluation
----------
We provide the results of fine-tuning this model on a set of downstream tasks.
We choose these tasks from the XTREME benchmark, with evaluation done on Indian language test-sets.
We also transliterate the test-sets and evaluate on the same.
We use the same fine-tuning setting as is used by [9], except for TyDiQA, where we use additional SQuAD v1.1 English training data, similar to [10].
For Tatoeba, we do not fine-tune the model, and use the pooled\_output of the last layer as the sentence embedding.
All results are computed in a zero-shot setting, with English being the high resource training set language.
* Shown below are results on datasets from the XTREME benchmark (in %)
* Shown below are results on the transliterated versions of the above
test-sets.
References
----------
[1]: Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova. BERT:
Pre-training of Deep Bidirectional Transformers for Language
Understanding. arXiv preprint
arXiv:1810.04805, 2018.
[2]: Wikipedia
[3]: Common Crawl
[4]:
PMINDIA
[5]: Dakshina
[6]: Assamese (as), Bengali (bn), English (en), Gujarati (gu), Hindi (hi),
Kannada (kn), Kashmiri (ks), Malayalam (ml), Marathi (mr), Nepali (ne), Oriya
(or), Punjabi (pa), Sanskrit (sa), Sindhi (sd), Tamil (ta), Telugu (te) and Urdu
(ur).
[7]: Conneau, Alexis, et al.
Unsupervised cross-lingual representation learning at scale.
arXiv preprint arXiv:1911.02116 (2019).
[8]: IndicTrans
[9]: Hu, J., Ruder, S., Siddhant, A., Neubig, G., Firat, O., & Johnson, M.
(2020). Xtreme: A massively multilingual multi-task benchmark for evaluating
cross-lingual generalization. arXiv
preprint arXiv:2003.11080.
[10]: Fang, Y., Wang, S., Gan, Z., Sun, S., & Liu, J. (2020).
FILTER: An Enhanced Fusion Method for Cross-lingual Language Understanding.
arXiv preprint arXiv:2009.05166.
If you find MuRIL useful in your applications, please cite the following paper:
Contact
-------
Please mail your queries/feedback to muril-contact@URL.
|
[
"### Trainable parameters\n\n\nAll parameters in the module are trainable, and fine-tuning all parameters is\nthe recommended practice.\n\n\nUses & Limitations\n------------------\n\n\nThis model is intended to be used for a variety of downstream NLP tasks for\nIndian languages. This model is trained on transliterated data as well, a\nphenomomenon commonly observed in the Indian context. This model is not expected\nto perform well on languages other than the ones used in pretraining, i.e. 17\nIndian languages.\n\n\nEvaluation\n----------\n\n\nWe provide the results of fine-tuning this model on a set of downstream tasks. \n\nWe choose these tasks from the XTREME benchmark, with evaluation done on Indian language test-sets. \n\nWe also transliterate the test-sets and evaluate on the same. \n\nWe use the same fine-tuning setting as is used by [9], except for TyDiQA, where we use additional SQuAD v1.1 English training data, similar to [10]. \n\nFor Tatoeba, we do not fine-tune the model, and use the pooled\\_output of the last layer as the sentence embedding. \n\nAll results are computed in a zero-shot setting, with English being the high resource training set language.\n\n\n* Shown below are results on datasets from the XTREME benchmark (in %)\n* Shown below are results on the transliterated versions of the above\ntest-sets.\n\n\nReferences\n----------\n\n\n[1]: Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova. BERT:\nPre-training of Deep Bidirectional Transformers for Language\nUnderstanding. arXiv preprint\narXiv:1810.04805, 2018.\n\n\n[2]: Wikipedia\n\n\n[3]: Common Crawl\n\n\n[4]:\nPMINDIA\n\n\n[5]: Dakshina\n\n\n[6]: Assamese (as), Bengali (bn), English (en), Gujarati (gu), Hindi (hi),\nKannada (kn), Kashmiri (ks), Malayalam (ml), Marathi (mr), Nepali (ne), Oriya\n(or), Punjabi (pa), Sanskrit (sa), Sindhi (sd), Tamil (ta), Telugu (te) and Urdu\n(ur).\n\n\n[7]: Conneau, Alexis, et al.\nUnsupervised cross-lingual representation learning at scale.\narXiv preprint arXiv:1911.02116 (2019).\n\n\n[8]: IndicTrans\n\n\n[9]: Hu, J., Ruder, S., Siddhant, A., Neubig, G., Firat, O., & Johnson, M.\n(2020). Xtreme: A massively multilingual multi-task benchmark for evaluating\ncross-lingual generalization. arXiv\npreprint arXiv:2003.11080.\n\n\n[10]: Fang, Y., Wang, S., Gan, Z., Sun, S., & Liu, J. (2020).\nFILTER: An Enhanced Fusion Method for Cross-lingual Language Understanding.\narXiv preprint arXiv:2009.05166.\n\n\nIf you find MuRIL useful in your applications, please cite the following paper:\n\n\nContact\n-------\n\n\nPlease mail your queries/feedback to muril-contact@URL."
] |
[
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #arxiv-2103.10730 #arxiv-1810.04805 #arxiv-1911.02116 #arxiv-2003.11080 #arxiv-2009.05166 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### Trainable parameters\n\n\nAll parameters in the module are trainable, and fine-tuning all parameters is\nthe recommended practice.\n\n\nUses & Limitations\n------------------\n\n\nThis model is intended to be used for a variety of downstream NLP tasks for\nIndian languages. This model is trained on transliterated data as well, a\nphenomomenon commonly observed in the Indian context. This model is not expected\nto perform well on languages other than the ones used in pretraining, i.e. 17\nIndian languages.\n\n\nEvaluation\n----------\n\n\nWe provide the results of fine-tuning this model on a set of downstream tasks. \n\nWe choose these tasks from the XTREME benchmark, with evaluation done on Indian language test-sets. \n\nWe also transliterate the test-sets and evaluate on the same. \n\nWe use the same fine-tuning setting as is used by [9], except for TyDiQA, where we use additional SQuAD v1.1 English training data, similar to [10]. \n\nFor Tatoeba, we do not fine-tune the model, and use the pooled\\_output of the last layer as the sentence embedding. \n\nAll results are computed in a zero-shot setting, with English being the high resource training set language.\n\n\n* Shown below are results on datasets from the XTREME benchmark (in %)\n* Shown below are results on the transliterated versions of the above\ntest-sets.\n\n\nReferences\n----------\n\n\n[1]: Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova. BERT:\nPre-training of Deep Bidirectional Transformers for Language\nUnderstanding. arXiv preprint\narXiv:1810.04805, 2018.\n\n\n[2]: Wikipedia\n\n\n[3]: Common Crawl\n\n\n[4]:\nPMINDIA\n\n\n[5]: Dakshina\n\n\n[6]: Assamese (as), Bengali (bn), English (en), Gujarati (gu), Hindi (hi),\nKannada (kn), Kashmiri (ks), Malayalam (ml), Marathi (mr), Nepali (ne), Oriya\n(or), Punjabi (pa), Sanskrit (sa), Sindhi (sd), Tamil (ta), Telugu (te) and Urdu\n(ur).\n\n\n[7]: Conneau, Alexis, et al.\nUnsupervised cross-lingual representation learning at scale.\narXiv preprint arXiv:1911.02116 (2019).\n\n\n[8]: IndicTrans\n\n\n[9]: Hu, J., Ruder, S., Siddhant, A., Neubig, G., Firat, O., & Johnson, M.\n(2020). Xtreme: A massively multilingual multi-task benchmark for evaluating\ncross-lingual generalization. arXiv\npreprint arXiv:2003.11080.\n\n\n[10]: Fang, Y., Wang, S., Gan, Z., Sun, S., & Liu, J. (2020).\nFILTER: An Enhanced Fusion Method for Cross-lingual Language Understanding.\narXiv preprint arXiv:2009.05166.\n\n\nIf you find MuRIL useful in your applications, please cite the following paper:\n\n\nContact\n-------\n\n\nPlease mail your queries/feedback to muril-contact@URL."
] |
feature-extraction
|
transformers
|
# MuRIL Large
Multilingual Representations for Indian Languages: A BERT Large (24L) model pre-trained on 17 Indian languages and their transliterated counterparts.
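A minimal usage sketch for feature extraction with the `transformers` library, assuming the hub id `google/muril-large-cased` under which this card is published:
```
from transformers import AutoTokenizer, AutoModel

# Minimal sketch: extract contextual token representations from MuRIL Large.
tokenizer = AutoTokenizer.from_pretrained("google/muril-large-cased")
model = AutoModel.from_pretrained("google/muril-large-cased")

text = "MuRIL supports 17 Indian languages and their transliterations."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)

token_embeddings = outputs.last_hidden_state  # (batch_size, seq_len, hidden_size)
print(token_embeddings.shape)
```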
## Overview
This model uses a BERT large architecture [1] pretrained from scratch using the
Wikipedia [2], Common Crawl [3], PMINDIA [4] and Dakshina [5] corpora for 17 [6]
Indian languages.
We use a training paradigm similar to multilingual BERT, with a few
modifications as listed:
* We include translation and transliteration segment pairs in training as
well.
* We keep an exponent value of 0.3 rather than 0.7 for upsampling, which has been shown to enhance low-resource performance. [7]
See the Training section for more details.
## Training
The MuRIL model is pre-trained on monolingual segments as well as parallel
segments as detailed below :
* Monolingual Data : We make use of publicly available corpora from Wikipedia
and Common Crawl for 17 Indian languages.
* Parallel Data : We have two types of parallel data :
* Translated Data : We obtain translations of the above monolingual
corpora using the Google NMT pipeline. We feed translated segment pairs
as input. We also make use of the publicly available PMINDIA corpus.
* Transliterated Data : We obtain transliterations of Wikipedia using the
IndicTrans [8] library. We feed transliterated segment pairs as input.
We also make use of the publicly available Dakshina dataset.
We keep an exponent value of 0.3 to calculate duplication multiplier values for
upsampling of lower-resourced languages and set dupe factors accordingly. Note
that we limit transliterated pairs to Wikipedia only.
The model was trained using a self-supervised masked language modeling task. We
do whole word masking with a maximum of 80 predictions. The model was trained
for 1500K steps, with a batch size of 8192, and a max sequence length of 512.
### Trainable parameters
All parameters in the module are trainable, and fine-tuning all parameters is
the recommended practice.
## Uses & Limitations
This model is intended to be used for a variety of downstream NLP tasks for
Indian languages. This model is trained on transliterated data as well, a
phenomenon commonly observed in the Indian context. This model is not expected
to perform well on languages other than the ones used in pre-training, i.e. 17
Indian languages.
## Evaluation
We provide the results of fine-tuning this model on a set of downstream tasks.<br/>
We choose these tasks from the XTREME benchmark, with evaluation done on Indian language test-sets.<br/>
All results are computed in a zero-shot setting, with English being the high resource training set language.<br/>
The results for XLM-R (Large) are taken from the XTREME paper [9].
* Shown below are results on datasets from the XTREME benchmark (in %)
<br/>
PANX (F1) | bn | en | hi | ml | mr | ta | te | ur | Average
:------------ | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ------:
XLM-R (large) | 78.8 | 84.7 | 73.0 | 67.8 | 68.1 | 59.5 | 55.8 | 56.4 | 68.0
MuRIL (large) | 85.8 | 85.0 | 78.3 | 75.6 | 77.3 | 71.1 | 65.6 | 83.0 | 77.7
<br/>
UDPOS (F1) | en | hi | mr | ta | te | ur | Average
:------------ | ---: | ---: | ---: | ---: | ---: | ---: | ------:
XLM-R (large) | 96.1 | 76.4 | 80.8 | 65.2 | 86.6 | 70.3 | 79.2
MuRIL (large) | 95.7 | 71.3 | 85.7 | 62.6 | 85.8 | 62.8 | 77.3
<br/>
XNLI (Accuracy) | en | hi | ur | Average
:-------------- | ---: | ---: | ---: | ------:
XLM-R (large) | 88.7 | 75.6 | 71.7 | 78.7
MuRIL (large) | 88.4 | 75.8 | 71.7 | 78.6
<br/>
XQUAD (F1/EM) | en | hi | Average
:------------ | --------: | --------: | --------:
XLM-R (large) | 86.5/75.7 | 76.7/59.7 | 81.6/67.7
MuRIL (large) | 88.2/77.8 | 78.4/62.4 | 83.3/70.1
<br/>
MLQA (F1/EM) | en | hi | Average
:------------ | --------: | --------: | --------:
XLM-R (large) | 83.5/70.6 | 70.6/53.1 | 77.1/61.9
MuRIL (large) | 84.4/71.7 | 72.2/54.1 | 78.3/62.9
<br/>
TyDiQA (F1/EM) | en | bn | te | Average
:------------- | --------: | --------: | --------: | --------:
XLM-R (large) | 71.5/56.8 | 64.0/47.8 | 70.1/43.6 | 68.5/49.4
MuRIL (large) | 75.9/66.8 | 67.1/53.1 | 71.5/49.8 | 71.5/56.6
<br/>
The fine-tuning hyperparameters are as follows:
Task | Batch Size | Learning Rate | Epochs | Warm-up Ratio
:----- | ---------: | ------------: | -----: | ------------:
PANX | 32 | 2e-5 | 10 | 0.1
UDPOS | 64 | 5e-6 | 10 | 0.1
XNLI | 128 | 2e-5 | 5 | 0.1
XQuAD | 32 | 3e-5 | 2 | 0.1
MLQA | 32 | 3e-5 | 2 | 0.1
TyDiQA | 32 | 3e-5 | 3 | 0.1
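For reference, the XNLI row above maps onto `transformers` training arguments roughly as sketched below; this is an illustrative configuration only, not the exact fine-tuning code used to produce these results:
```
from transformers import TrainingArguments

# Sketch of the XNLI fine-tuning configuration from the table above
# (total batch size 128, learning rate 2e-5, 5 epochs, warm-up ratio 0.1).
# 128 is the global batch size; the per-device size depends on the hardware used.
# Dataset loading, the classification head, and the Trainer wiring are omitted.
xnli_args = TrainingArguments(
    output_dir="muril-large-xnli",
    per_device_train_batch_size=16,
    gradient_accumulation_steps=8,   # 16 * 8 = 128 effective batch size
    learning_rate=2e-5,
    num_train_epochs=5,
    warmup_ratio=0.1,
)
print(xnli_args.learning_rate, xnli_args.num_train_epochs)
```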
## References
\[1]: Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova. [BERT:
Pre-training of Deep Bidirectional Transformers for Language
Understanding](https://arxiv.org/abs/1810.04805). arXiv preprint
arXiv:1810.04805, 2018.
\[2]: [Wikipedia](https://www.tensorflow.org/datasets/catalog/wikipedia)
\[3]: [Common Crawl](http://commoncrawl.org/the-data/)
\[4]:
[PMINDIA](http://lotus.kuee.kyoto-u.ac.jp/WAT/indic-multilingual/index.html)
\[5]: [Dakshina](https://github.com/google-research-datasets/dakshina)
\[6]: Assamese (as), Bengali (bn), English (en), Gujarati (gu), Hindi (hi),
Kannada (kn), Kashmiri (ks), Malayalam (ml), Marathi (mr), Nepali (ne), Oriya
(or), Punjabi (pa), Sanskrit (sa), Sindhi (sd), Tamil (ta), Telugu (te) and Urdu
(ur).
\[7]: Conneau, Alexis, et al.
[Unsupervised cross-lingual representation learning at scale](https://arxiv.org/pdf/1911.02116.pdf).
arXiv preprint arXiv:1911.02116 (2019).
\[8]: [IndicTrans](https://github.com/libindic/indic-trans)
\[9]: Hu, J., Ruder, S., Siddhant, A., Neubig, G., Firat, O., & Johnson, M.
(2020). [Xtreme: A massively multilingual multi-task benchmark for evaluating
cross-lingual generalization.](https://arxiv.org/pdf/2003.11080.pdf) arXiv
preprint arXiv:2003.11080.
\[10]: Fang, Y., Wang, S., Gan, Z., Sun, S., & Liu, J. (2020).
[FILTER: An Enhanced Fusion Method for Cross-lingual Language Understanding.](https://arxiv.org/pdf/2009.05166.pdf)
arXiv preprint arXiv:2009.05166.
## Citation
If you find MuRIL useful in your applications, please cite the following paper:
```
@misc{khanuja2021muril,
title={MuRIL: Multilingual Representations for Indian Languages},
author={Simran Khanuja and Diksha Bansal and Sarvesh Mehtani and Savya Khosla and Atreyee Dey and Balaji Gopalan and Dilip Kumar Margam and Pooja Aggarwal and Rajiv Teja Nagipogu and Shachi Dave and Shruti Gupta and Subhash Chandra Bose Gali and Vish Subramanian and Partha Talukdar},
year={2021},
eprint={2103.10730},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Contact
Please mail your queries/feedback to muril-contact@google.com.
|
{}
|
google/muril-large-cased
| null |
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"arxiv:1810.04805",
"arxiv:1911.02116",
"arxiv:2003.11080",
"arxiv:2009.05166",
"arxiv:2103.10730",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1810.04805",
"1911.02116",
"2003.11080",
"2009.05166",
"2103.10730"
] |
[] |
TAGS
#transformers #pytorch #bert #feature-extraction #arxiv-1810.04805 #arxiv-1911.02116 #arxiv-2003.11080 #arxiv-2009.05166 #arxiv-2103.10730 #endpoints_compatible #region-us
|
MuRIL Large
===========
Multilingual Representations for Indian Languages : A BERT Large (24L) model pre-trained on 17 Indian languages, and their transliterated counterparts.
Overview
--------
This model uses a BERT large architecture [1] pretrained from scratch using the
Wikipedia [2], Common Crawl [3], PMINDIA [4] and Dakshina [5] corpora for 17 [6]
Indian languages.
We use a training paradigm similar to multilingual bert, with a few
modifications as listed:
* We include translation and transliteration segment pairs in training as
well.
* We keep an exponent value of 0.3 and not 0.7 for upsampling, shown to
enhance low-resource performance. [7]
See the Training section for more details.
Training
--------
The MuRIL model is pre-trained on monolingual segments as well as parallel
segments as detailed below :
* Monolingual Data : We make use of publicly available corpora from Wikipedia
and Common Crawl for 17 Indian languages.
* Parallel Data : We have two types of parallel data :
+ Translated Data : We obtain translations of the above monolingual
corpora using the Google NMT pipeline. We feed translated segment pairs
as input. We also make use of the publicly available PMINDIA corpus.
+ Transliterated Data : We obtain transliterations of Wikipedia using the
IndicTrans [8] library. We feed transliterated segment pairs as input.
We also make use of the publicly available Dakshina dataset.
We keep an exponent value of 0.3 to calculate duplication multiplier values for
upsampling of lower resourced languages and set dupe factors accordingly. Note,
we limit transliterated pairs to Wikipedia only.
The model was trained using a self-supervised masked language modeling task. We
do whole word masking with a maximum of 80 predictions. The model was trained
for 1500K steps, with a batch size of 8192, and a max sequence length of 512.
### Trainable parameters
All parameters in the module are trainable, and fine-tuning all parameters is
the recommended practice.
Uses & Limitations
------------------
This model is intended to be used for a variety of downstream NLP tasks for
Indian languages. This model is trained on transliterated data as well, a
phenomenon commonly observed in the Indian context. This model is not expected
to perform well on languages other than the ones used in pre-training, i.e. 17
Indian languages.
Evaluation
----------
We provide the results of fine-tuning this model on a set of downstream tasks.
We choose these tasks from the XTREME benchmark, with evaluation done on Indian language test-sets.
All results are computed in a zero-shot setting, with English being the high resource training set language.
The results for XLM-R (Large) are taken from the XTREME paper [9].
* Shown below are results on datasets from the XTREME benchmark (in %)
The fine-tuning hyperparameters are as follows:
References
----------
[1]: Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova. BERT:
Pre-training of Deep Bidirectional Transformers for Language
Understanding. arXiv preprint
arXiv:1810.04805, 2018.
[2]: Wikipedia
[3]: Common Crawl
[4]:
PMINDIA
[5]: Dakshina
[6]: Assamese (as), Bengali (bn), English (en), Gujarati (gu), Hindi (hi),
Kannada (kn), Kashmiri (ks), Malayalam (ml), Marathi (mr), Nepali (ne), Oriya
(or), Punjabi (pa), Sanskrit (sa), Sindhi (sd), Tamil (ta), Telugu (te) and Urdu
(ur).
[7]: Conneau, Alexis, et al.
Unsupervised cross-lingual representation learning at scale.
arXiv preprint arXiv:1911.02116 (2019).
[8]: IndicTrans
[9]: Hu, J., Ruder, S., Siddhant, A., Neubig, G., Firat, O., & Johnson, M.
(2020). Xtreme: A massively multilingual multi-task benchmark for evaluating
cross-lingual generalization. arXiv
preprint arXiv:2003.11080.
[10]: Fang, Y., Wang, S., Gan, Z., Sun, S., & Liu, J. (2020).
FILTER: An Enhanced Fusion Method for Cross-lingual Language Understanding.
arXiv preprint arXiv:2009.05166.
If you find MuRIL useful in your applications, please cite the following paper:
Contact
-------
Please mail your queries/feedback to muril-contact@URL.
|
[
"### Trainable parameters\n\n\nAll parameters in the module are trainable, and fine-tuning all parameters is\nthe recommended practice.\n\n\nUses & Limitations\n------------------\n\n\nThis model is intended to be used for a variety of downstream NLP tasks for\nIndian languages. This model is trained on transliterated data as well, a\nphenomenon commonly observed in the Indian context. This model is not expected\nto perform well on languages other than the ones used in pre-training, i.e. 17\nIndian languages.\n\n\nEvaluation\n----------\n\n\nWe provide the results of fine-tuning this model on a set of downstream tasks. \n\nWe choose these tasks from the XTREME benchmark, with evaluation done on Indian language test-sets. \n\nAll results are computed in a zero-shot setting, with English being the high resource training set language. \n\nThe results for XLM-R (Large) are taken from the XTREME paper [9].\n\n\n* Shown below are results on datasets from the XTREME benchmark (in %)\n \n\n\n\n\n \n\n\n \n\n\n \n\n\n \n\n\n \n\n\n \n\nThe fine-tuning hyperparameters are as follows:\n\n\nReferences\n----------\n\n\n[1]: Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova. BERT:\nPre-training of Deep Bidirectional Transformers for Language\nUnderstanding. arXiv preprint\narXiv:1810.04805, 2018.\n\n\n[2]: Wikipedia\n\n\n[3]: Common Crawl\n\n\n[4]:\nPMINDIA\n\n\n[5]: Dakshina\n\n\n[6]: Assamese (as), Bengali (bn), English (en), Gujarati (gu), Hindi (hi),\nKannada (kn), Kashmiri (ks), Malayalam (ml), Marathi (mr), Nepali (ne), Oriya\n(or), Punjabi (pa), Sanskrit (sa), Sindhi (sd), Tamil (ta), Telugu (te) and Urdu\n(ur).\n\n\n[7]: Conneau, Alexis, et al.\nUnsupervised cross-lingual representation learning at scale.\narXiv preprint arXiv:1911.02116 (2019).\n\n\n[8]: IndicTrans\n\n\n[9]: Hu, J., Ruder, S., Siddhant, A., Neubig, G., Firat, O., & Johnson, M.\n(2020). Xtreme: A massively multilingual multi-task benchmark for evaluating\ncross-lingual generalization. arXiv\npreprint arXiv:2003.11080.\n\n\n[10]: Fang, Y., Wang, S., Gan, Z., Sun, S., & Liu, J. (2020).\nFILTER: An Enhanced Fusion Method for Cross-lingual Language Understanding.\narXiv preprint arXiv:2009.05166.\n\n\nIf you find MuRIL useful in your applications, please cite the following paper:\n\n\nContact\n-------\n\n\nPlease mail your queries/feedback to muril-contact@URL."
] |
[
"TAGS\n#transformers #pytorch #bert #feature-extraction #arxiv-1810.04805 #arxiv-1911.02116 #arxiv-2003.11080 #arxiv-2009.05166 #arxiv-2103.10730 #endpoints_compatible #region-us \n",
"### Trainable parameters\n\n\nAll parameters in the module are trainable, and fine-tuning all parameters is\nthe recommended practice.\n\n\nUses & Limitations\n------------------\n\n\nThis model is intended to be used for a variety of downstream NLP tasks for\nIndian languages. This model is trained on transliterated data as well, a\nphenomenon commonly observed in the Indian context. This model is not expected\nto perform well on languages other than the ones used in pre-training, i.e. 17\nIndian languages.\n\n\nEvaluation\n----------\n\n\nWe provide the results of fine-tuning this model on a set of downstream tasks. \n\nWe choose these tasks from the XTREME benchmark, with evaluation done on Indian language test-sets. \n\nAll results are computed in a zero-shot setting, with English being the high resource training set language. \n\nThe results for XLM-R (Large) are taken from the XTREME paper [9].\n\n\n* Shown below are results on datasets from the XTREME benchmark (in %)\n \n\n\n\n\n \n\n\n \n\n\n \n\n\n \n\n\n \n\n\n \n\nThe fine-tuning hyperparameters are as follows:\n\n\nReferences\n----------\n\n\n[1]: Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova. BERT:\nPre-training of Deep Bidirectional Transformers for Language\nUnderstanding. arXiv preprint\narXiv:1810.04805, 2018.\n\n\n[2]: Wikipedia\n\n\n[3]: Common Crawl\n\n\n[4]:\nPMINDIA\n\n\n[5]: Dakshina\n\n\n[6]: Assamese (as), Bengali (bn), English (en), Gujarati (gu), Hindi (hi),\nKannada (kn), Kashmiri (ks), Malayalam (ml), Marathi (mr), Nepali (ne), Oriya\n(or), Punjabi (pa), Sanskrit (sa), Sindhi (sd), Tamil (ta), Telugu (te) and Urdu\n(ur).\n\n\n[7]: Conneau, Alexis, et al.\nUnsupervised cross-lingual representation learning at scale.\narXiv preprint arXiv:1911.02116 (2019).\n\n\n[8]: IndicTrans\n\n\n[9]: Hu, J., Ruder, S., Siddhant, A., Neubig, G., Firat, O., & Johnson, M.\n(2020). Xtreme: A massively multilingual multi-task benchmark for evaluating\ncross-lingual generalization. arXiv\npreprint arXiv:2003.11080.\n\n\n[10]: Fang, Y., Wang, S., Gan, Z., Sun, S., & Liu, J. (2020).\nFILTER: An Enhanced Fusion Method for Cross-lingual Language Understanding.\narXiv preprint arXiv:2009.05166.\n\n\nIf you find MuRIL useful in your applications, please cite the following paper:\n\n\nContact\n-------\n\n\nPlease mail your queries/feedback to muril-contact@URL."
] |
summarization
|
transformers
|
### Pegasus Models
See Docs: [here](https://huggingface.co/transformers/master/model_doc/pegasus.html)
Original TF 1 code [here](https://github.com/google-research/pegasus)
Authors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019
Maintained by: [@sshleifer](https://twitter.com/sam_shleifer)
Task: Summarization
The following is copied from the authors' README.
# Mixed & Stochastic Checkpoints
We train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated results are reported in this table.
| dataset | C4 | HugeNews | Mixed & Stochastic|
| ---- | ---- | ---- | ----|
| xsum | 45.20/22.06/36.99 | 47.21/24.56/39.25 | 47.60/24.83/39.64|
| cnn_dailymail | 43.90/21.20/40.76 | 44.17/21.47/41.11 | 44.16/21.56/41.30|
| newsroom | 45.07/33.39/41.28 | 45.15/33.51/41.33 | 45.98/34.20/42.18|
| multi_news | 46.74/17.95/24.26 | 47.52/18.72/24.91 | 47.65/18.75/24.95|
| gigaword | 38.75/19.96/36.14 | 39.12/19.86/36.24 | 39.65/20.47/36.76|
| wikihow | 43.07/19.70/34.79 | 41.35/18.51/33.42 | 46.39/22.12/38.41 *|
| reddit_tifu | 26.54/8.94/21.64 | 26.63/9.01/21.60 | 27.99/9.81/22.94|
| big_patent | 53.63/33.16/42.25 | 53.41/32.89/42.07 | 52.29/33.08/41.66 *|
| arxiv | 44.70/17.27/25.80 | 44.67/17.18/25.73 | 44.21/16.95/25.67|
| pubmed | 45.49/19.90/27.69 | 45.09/19.56/27.42 | 45.97/20.15/28.25|
| aeslc | 37.69/21.85/36.84 | 37.40/21.22/36.45 | 37.68/21.25/36.51|
| billsum | 57.20/39.56/45.80 | 57.31/40.19/45.82 | 59.67/41.58/47.59|
The "Mixed & Stochastic" model has the following changes (relative to pegasus-large in the paper):
- trained on both C4 and HugeNews (the dataset mixture is weighted by their number of examples).
- trained for 1.5M steps instead of 500k (we observe slower convergence on pretraining perplexity).
- the model uniformly samples a gap sentence ratio between 15% and 45%.
- important sentences are sampled using importance scores perturbed with 20% uniform noise.
- the sentencepiece tokenizer is updated to be able to encode the newline character.
(*) the numbers for the wikihow and big_patent datasets are not comparable because of changes in tokenization and data:
- the wikihow dataset contains newline characters, which are useful for paragraph segmentation; the sentencepiece tokenizer of the C4 and HugeNews models does not encode newlines and loses this information.
- we updated the BigPatent dataset to preserve casing; some formatting cleanups also changed, please refer to the changes in TFDS.
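Below is a minimal usage sketch, assuming the Hugging Face `transformers` library; the example email text is illustrative and not from the AESLC dataset.
```
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

# Minimal sketch: generate a short, subject-line style summary of an email body.
model_name = "google/pegasus-aeslc"
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)

email = ("Hi team, attached is the quarterly report. Please review the revenue "
         "figures and send me your comments before Friday's meeting.")
batch = tokenizer(email, truncation=True, padding="longest", return_tensors="pt")
summary_ids = model.generate(**batch)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True))
```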
Citation
```
@misc{zhang2019pegasus,
title={PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization},
author={Jingqing Zhang and Yao Zhao and Mohammad Saleh and Peter J. Liu},
year={2019},
eprint={1912.08777},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "tags": ["summarization"]}
|
google/pegasus-aeslc
| null |
[
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"summarization",
"en",
"arxiv:1912.08777",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1912.08777"
] |
[
"en"
] |
TAGS
#transformers #pytorch #pegasus #text2text-generation #summarization #en #arxiv-1912.08777 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
### Pegasus Models
See Docs: here
Original TF 1 code here
Authors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019
Maintained by: @sshleifer
Task: Summarization
The following is copied from the authors' README.
Mixed & Stochastic Checkpoints
==============================
We train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated the results are reported in this table.
The "Mixed & Stochastic" model has the following changes:
* trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).
* trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).
* the model uniformly sample a gap sentence ratio between 15% and 45%.
* importance sentences are sampled using a 20% uniform noise to importance scores.
* the sentencepiece tokenizer is updated to be able to encode newline character.
(\*) the numbers of wikihow and big\_patent datasets are not comparable because of change in tokenization and data:
* wikihow dataset contains newline characters which is useful for paragraph segmentation, the C4 and HugeNews model's sentencepiece tokenizer doesn't encode newline and loose this information.
* we update the BigPatent dataset to preserve casing, some format cleanings are also changed, please refer to change in TFDS.
The "Mixed & Stochastic" model has the following changes (from pegasus-large in the paper):
trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).
trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).
the model uniformly sample a gap sentence ratio between 15% and 45%.
importance sentences are sampled using a 20% uniform noise to importance scores.
the sentencepiece tokenizer is updated to be able to encode newline character.
Citation
|
[
"### Pegasus Models\n\n\nSee Docs: here\n\n\nOriginal TF 1 code here\n\n\nAuthors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019\n\n\nMaintained by: @sshleifer\n\n\nTask: Summarization\n\n\nThe following is copied from the authors' README.\n\n\nMixed & Stochastic Checkpoints\n==============================\n\n\nWe train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated the results are reported in this table.\n\n\n\nThe \"Mixed & Stochastic\" model has the following changes:\n\n\n* trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).\n* trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).\n* the model uniformly sample a gap sentence ratio between 15% and 45%.\n* importance sentences are sampled using a 20% uniform noise to importance scores.\n* the sentencepiece tokenizer is updated to be able to encode newline character.\n\n\n(\\*) the numbers of wikihow and big\\_patent datasets are not comparable because of change in tokenization and data:\n\n\n* wikihow dataset contains newline characters which is useful for paragraph segmentation, the C4 and HugeNews model's sentencepiece tokenizer doesn't encode newline and loose this information.\n* we update the BigPatent dataset to preserve casing, some format cleanings are also changed, please refer to change in TFDS.\n\n\nThe \"Mixed & Stochastic\" model has the following changes (from pegasus-large in the paper):\n\n\ntrained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).\ntrained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).\nthe model uniformly sample a gap sentence ratio between 15% and 45%.\nimportance sentences are sampled using a 20% uniform noise to importance scores.\nthe sentencepiece tokenizer is updated to be able to encode newline character.\n\n\nCitation"
] |
[
"TAGS\n#transformers #pytorch #pegasus #text2text-generation #summarization #en #arxiv-1912.08777 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### Pegasus Models\n\n\nSee Docs: here\n\n\nOriginal TF 1 code here\n\n\nAuthors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019\n\n\nMaintained by: @sshleifer\n\n\nTask: Summarization\n\n\nThe following is copied from the authors' README.\n\n\nMixed & Stochastic Checkpoints\n==============================\n\n\nWe train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated the results are reported in this table.\n\n\n\nThe \"Mixed & Stochastic\" model has the following changes:\n\n\n* trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).\n* trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).\n* the model uniformly sample a gap sentence ratio between 15% and 45%.\n* importance sentences are sampled using a 20% uniform noise to importance scores.\n* the sentencepiece tokenizer is updated to be able to encode newline character.\n\n\n(\\*) the numbers of wikihow and big\\_patent datasets are not comparable because of change in tokenization and data:\n\n\n* wikihow dataset contains newline characters which is useful for paragraph segmentation, the C4 and HugeNews model's sentencepiece tokenizer doesn't encode newline and loose this information.\n* we update the BigPatent dataset to preserve casing, some format cleanings are also changed, please refer to change in TFDS.\n\n\nThe \"Mixed & Stochastic\" model has the following changes (from pegasus-large in the paper):\n\n\ntrained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).\ntrained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).\nthe model uniformly sample a gap sentence ratio between 15% and 45%.\nimportance sentences are sampled using a 20% uniform noise to importance scores.\nthe sentencepiece tokenizer is updated to be able to encode newline character.\n\n\nCitation"
] |
summarization
|
transformers
|
### Pegasus Models
See Docs: [here](https://huggingface.co/transformers/master/model_doc/pegasus.html)
Original TF 1 code [here](https://github.com/google-research/pegasus)
Authors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019
Maintained by: [@sshleifer](https://twitter.com/sam_shleifer)
Task: Summarization
The following is copied from the authors' README.
# Mixed & Stochastic Checkpoints
We train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated results are reported in this table.
| dataset | C4 | HugeNews | Mixed & Stochastic|
| ---- | ---- | ---- | ----|
| xsum | 45.20/22.06/36.99 | 47.21/24.56/39.25 | 47.60/24.83/39.64|
| cnn_dailymail | 43.90/21.20/40.76 | 44.17/21.47/41.11 | 44.16/21.56/41.30|
| newsroom | 45.07/33.39/41.28 | 45.15/33.51/41.33 | 45.98/34.20/42.18|
| multi_news | 46.74/17.95/24.26 | 47.52/18.72/24.91 | 47.65/18.75/24.95|
| gigaword | 38.75/19.96/36.14 | 39.12/19.86/36.24 | 39.65/20.47/36.76|
| wikihow | 43.07/19.70/34.79 | 41.35/18.51/33.42 | 46.39/22.12/38.41 *|
| reddit_tifu | 26.54/8.94/21.64 | 26.63/9.01/21.60 | 27.99/9.81/22.94|
| big_patent | 53.63/33.16/42.25 | 53.41/32.89/42.07 | 52.29/33.08/41.66 *|
| arxiv | 44.70/17.27/25.80 | 44.67/17.18/25.73 | 44.21/16.95/25.67|
| pubmed | 45.49/19.90/27.69 | 45.09/19.56/27.42 | 45.97/20.15/28.25|
| aeslc | 37.69/21.85/36.84 | 37.40/21.22/36.45 | 37.68/21.25/36.51|
| billsum | 57.20/39.56/45.80 | 57.31/40.19/45.82 | 59.67/41.58/47.59|
The "Mixed & Stochastic" model has the following changes (relative to pegasus-large in the paper):
- trained on both C4 and HugeNews (the dataset mixture is weighted by their number of examples).
- trained for 1.5M steps instead of 500k (we observe slower convergence on pretraining perplexity).
- the model uniformly samples a gap sentence ratio between 15% and 45%.
- important sentences are sampled using importance scores perturbed with 20% uniform noise.
- the sentencepiece tokenizer is updated to be able to encode the newline character.
(*) the numbers for the wikihow and big_patent datasets are not comparable because of changes in tokenization and data:
- the wikihow dataset contains newline characters, which are useful for paragraph segmentation; the sentencepiece tokenizer of the C4 and HugeNews models does not encode newlines and loses this information.
- we updated the BigPatent dataset to preserve casing; some formatting cleanups also changed, please refer to the changes in TFDS.
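A minimal sketch using the `transformers` pipeline API; the abstract text below is illustrative and not taken from the arXiv dataset.
```
from transformers import pipeline

# Minimal sketch: summarize (part of) a scientific article with this checkpoint.
summarizer = pipeline("summarization", model="google/pegasus-arxiv")
article = ("We study gap-sentence generation as a pre-training objective for "
           "abstractive summarization and evaluate transfer to scientific articles.")
print(summarizer(article))
```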
Citation
```
@misc{zhang2019pegasus,
title={PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization},
author={Jingqing Zhang and Yao Zhao and Mohammad Saleh and Peter J. Liu},
year={2019},
eprint={1912.08777},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "tags": ["summarization"]}
|
google/pegasus-arxiv
| null |
[
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"summarization",
"en",
"arxiv:1912.08777",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1912.08777"
] |
[
"en"
] |
TAGS
#transformers #pytorch #pegasus #text2text-generation #summarization #en #arxiv-1912.08777 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
### Pegasus Models
See Docs: here
Original TF 1 code here
Authors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019
Maintained by: @sshleifer
Task: Summarization
The following is copied from the authors' README.
Mixed & Stochastic Checkpoints
==============================
We train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated the results are reported in this table.
The "Mixed & Stochastic" model has the following changes:
* trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).
* trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).
* the model uniformly sample a gap sentence ratio between 15% and 45%.
* importance sentences are sampled using a 20% uniform noise to importance scores.
* the sentencepiece tokenizer is updated to be able to encode newline character.
(\*) the numbers of wikihow and big\_patent datasets are not comparable because of change in tokenization and data:
* wikihow dataset contains newline characters which is useful for paragraph segmentation, the C4 and HugeNews model's sentencepiece tokenizer doesn't encode newline and loose this information.
* we update the BigPatent dataset to preserve casing, some format cleanings are also changed, please refer to change in TFDS.
The "Mixed & Stochastic" model has the following changes (from pegasus-large in the paper):
trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).
trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).
the model uniformly sample a gap sentence ratio between 15% and 45%.
importance sentences are sampled using a 20% uniform noise to importance scores.
the sentencepiece tokenizer is updated to be able to encode newline character.
Citation
|
[
"### Pegasus Models\n\n\nSee Docs: here\n\n\nOriginal TF 1 code here\n\n\nAuthors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019\n\n\nMaintained by: @sshleifer\n\n\nTask: Summarization\n\n\nThe following is copied from the authors' README.\n\n\nMixed & Stochastic Checkpoints\n==============================\n\n\nWe train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated the results are reported in this table.\n\n\n\nThe \"Mixed & Stochastic\" model has the following changes:\n\n\n* trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).\n* trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).\n* the model uniformly sample a gap sentence ratio between 15% and 45%.\n* importance sentences are sampled using a 20% uniform noise to importance scores.\n* the sentencepiece tokenizer is updated to be able to encode newline character.\n\n\n(\\*) the numbers of wikihow and big\\_patent datasets are not comparable because of change in tokenization and data:\n\n\n* wikihow dataset contains newline characters which is useful for paragraph segmentation, the C4 and HugeNews model's sentencepiece tokenizer doesn't encode newline and loose this information.\n* we update the BigPatent dataset to preserve casing, some format cleanings are also changed, please refer to change in TFDS.\n\n\nThe \"Mixed & Stochastic\" model has the following changes (from pegasus-large in the paper):\n\n\ntrained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).\ntrained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).\nthe model uniformly sample a gap sentence ratio between 15% and 45%.\nimportance sentences are sampled using a 20% uniform noise to importance scores.\nthe sentencepiece tokenizer is updated to be able to encode newline character.\n\n\nCitation"
] |
[
"TAGS\n#transformers #pytorch #pegasus #text2text-generation #summarization #en #arxiv-1912.08777 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### Pegasus Models\n\n\nSee Docs: here\n\n\nOriginal TF 1 code here\n\n\nAuthors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019\n\n\nMaintained by: @sshleifer\n\n\nTask: Summarization\n\n\nThe following is copied from the authors' README.\n\n\nMixed & Stochastic Checkpoints\n==============================\n\n\nWe train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated the results are reported in this table.\n\n\n\nThe \"Mixed & Stochastic\" model has the following changes:\n\n\n* trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).\n* trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).\n* the model uniformly sample a gap sentence ratio between 15% and 45%.\n* importance sentences are sampled using a 20% uniform noise to importance scores.\n* the sentencepiece tokenizer is updated to be able to encode newline character.\n\n\n(\\*) the numbers of wikihow and big\\_patent datasets are not comparable because of change in tokenization and data:\n\n\n* wikihow dataset contains newline characters which is useful for paragraph segmentation, the C4 and HugeNews model's sentencepiece tokenizer doesn't encode newline and loose this information.\n* we update the BigPatent dataset to preserve casing, some format cleanings are also changed, please refer to change in TFDS.\n\n\nThe \"Mixed & Stochastic\" model has the following changes (from pegasus-large in the paper):\n\n\ntrained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).\ntrained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).\nthe model uniformly sample a gap sentence ratio between 15% and 45%.\nimportance sentences are sampled using a 20% uniform noise to importance scores.\nthe sentencepiece tokenizer is updated to be able to encode newline character.\n\n\nCitation"
] |
summarization
|
transformers
|
### Pegasus Models
See Docs: [here](https://huggingface.co/transformers/master/model_doc/pegasus.html)
Original TF 1 code [here](https://github.com/google-research/pegasus)
Authors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019
Maintained by: [@sshleifer](https://twitter.com/sam_shleifer)
Task: Summarization
The following is copied from the authors' README.
# Mixed & Stochastic Checkpoints
We train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated results are reported in this table.
| dataset | C4 | HugeNews | Mixed & Stochastic|
| ---- | ---- | ---- | ----|
| xsum | 45.20/22.06/36.99 | 47.21/24.56/39.25 | 47.60/24.83/39.64|
| cnn_dailymail | 43.90/21.20/40.76 | 44.17/21.47/41.11 | 44.16/21.56/41.30|
| newsroom | 45.07/33.39/41.28 | 45.15/33.51/41.33 | 45.98/34.20/42.18|
| multi_news | 46.74/17.95/24.26 | 47.52/18.72/24.91 | 47.65/18.75/24.95|
| gigaword | 38.75/19.96/36.14 | 39.12/19.86/36.24 | 39.65/20.47/36.76|
| wikihow | 43.07/19.70/34.79 | 41.35/18.51/33.42 | 46.39/22.12/38.41 *|
| reddit_tifu | 26.54/8.94/21.64 | 26.63/9.01/21.60 | 27.99/9.81/22.94|
| big_patent | 53.63/33.16/42.25 | 53.41/32.89/42.07 | 52.29/33.08/41.66 *|
| arxiv | 44.70/17.27/25.80 | 44.67/17.18/25.73 | 44.21/16.95/25.67|
| pubmed | 45.49/19.90/27.69 | 45.09/19.56/27.42 | 45.97/20.15/28.25|
| aeslc | 37.69/21.85/36.84 | 37.40/21.22/36.45 | 37.68/21.25/36.51|
| billsum | 57.20/39.56/45.80 | 57.31/40.19/45.82 | 59.67/41.58/47.59|
The "Mixed & Stochastic" model has the following changes (relative to pegasus-large in the paper):
- trained on both C4 and HugeNews (the dataset mixture is weighted by their number of examples).
- trained for 1.5M steps instead of 500k (we observe slower convergence on pretraining perplexity).
- the model uniformly samples a gap sentence ratio between 15% and 45%.
- important sentences are sampled using importance scores perturbed with 20% uniform noise.
- the sentencepiece tokenizer is updated to be able to encode the newline character.
(*) the numbers for the wikihow and big_patent datasets are not comparable because of changes in tokenization and data:
- the wikihow dataset contains newline characters, which are useful for paragraph segmentation; the sentencepiece tokenizer of the C4 and HugeNews models does not encode newlines and loses this information.
- we updated the BigPatent dataset to preserve casing; some formatting cleanups also changed, please refer to the changes in TFDS.
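A minimal sketch, assuming the `transformers` library; the bill text, the 1024-token truncation length, and the beam size are illustrative choices rather than settings from the original card.
```
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

# Minimal sketch: summarize a (long) legislative bill, truncating the input.
model_name = "google/pegasus-billsum"
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)

bill_text = "SECTION 1. SHORT TITLE. This Act may be cited as the ..."  # illustrative
batch = tokenizer(bill_text, truncation=True, max_length=1024, return_tensors="pt")
summary_ids = model.generate(**batch, num_beams=8)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```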
Citation
```
@misc{zhang2019pegasus,
title={PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization},
author={Jingqing Zhang and Yao Zhao and Mohammad Saleh and Peter J. Liu},
year={2019},
eprint={1912.08777},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "tags": ["summarization"]}
|
google/pegasus-billsum
| null |
[
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"summarization",
"en",
"arxiv:1912.08777",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1912.08777"
] |
[
"en"
] |
TAGS
#transformers #pytorch #pegasus #text2text-generation #summarization #en #arxiv-1912.08777 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
### Pegasus Models
See Docs: here
Original TF 1 code here
Authors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019
Maintained by: @sshleifer
Task: Summarization
The following is copied from the authors' README.
Mixed & Stochastic Checkpoints
==============================
We train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated the results are reported in this table.
The "Mixed & Stochastic" model has the following changes:
* trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).
* trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).
* the model uniformly sample a gap sentence ratio between 15% and 45%.
* importance sentences are sampled using a 20% uniform noise to importance scores.
* the sentencepiece tokenizer is updated to be able to encode newline character.
(\*) the numbers of wikihow and big\_patent datasets are not comparable because of change in tokenization and data:
* wikihow dataset contains newline characters which is useful for paragraph segmentation, the C4 and HugeNews model's sentencepiece tokenizer doesn't encode newline and loose this information.
* we update the BigPatent dataset to preserve casing, some format cleanings are also changed, please refer to change in TFDS.
The "Mixed & Stochastic" model has the following changes (from pegasus-large in the paper):
trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).
trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).
the model uniformly sample a gap sentence ratio between 15% and 45%.
importance sentences are sampled using a 20% uniform noise to importance scores.
the sentencepiece tokenizer is updated to be able to encode newline character.
Citation
|
[
"### Pegasus Models\n\n\nSee Docs: here\n\n\nOriginal TF 1 code here\n\n\nAuthors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019\n\n\nMaintained by: @sshleifer\n\n\nTask: Summarization\n\n\nThe following is copied from the authors' README.\n\n\nMixed & Stochastic Checkpoints\n==============================\n\n\nWe train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated the results are reported in this table.\n\n\n\nThe \"Mixed & Stochastic\" model has the following changes:\n\n\n* trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).\n* trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).\n* the model uniformly sample a gap sentence ratio between 15% and 45%.\n* importance sentences are sampled using a 20% uniform noise to importance scores.\n* the sentencepiece tokenizer is updated to be able to encode newline character.\n\n\n(\\*) the numbers of wikihow and big\\_patent datasets are not comparable because of change in tokenization and data:\n\n\n* wikihow dataset contains newline characters which is useful for paragraph segmentation, the C4 and HugeNews model's sentencepiece tokenizer doesn't encode newline and loose this information.\n* we update the BigPatent dataset to preserve casing, some format cleanings are also changed, please refer to change in TFDS.\n\n\nThe \"Mixed & Stochastic\" model has the following changes (from pegasus-large in the paper):\n\n\ntrained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).\ntrained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).\nthe model uniformly sample a gap sentence ratio between 15% and 45%.\nimportance sentences are sampled using a 20% uniform noise to importance scores.\nthe sentencepiece tokenizer is updated to be able to encode newline character.\n\n\nCitation"
] |
[
"TAGS\n#transformers #pytorch #pegasus #text2text-generation #summarization #en #arxiv-1912.08777 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### Pegasus Models\n\n\nSee Docs: here\n\n\nOriginal TF 1 code here\n\n\nAuthors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019\n\n\nMaintained by: @sshleifer\n\n\nTask: Summarization\n\n\nThe following is copied from the authors' README.\n\n\nMixed & Stochastic Checkpoints\n==============================\n\n\nWe train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated the results are reported in this table.\n\n\n\nThe \"Mixed & Stochastic\" model has the following changes:\n\n\n* trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).\n* trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).\n* the model uniformly sample a gap sentence ratio between 15% and 45%.\n* importance sentences are sampled using a 20% uniform noise to importance scores.\n* the sentencepiece tokenizer is updated to be able to encode newline character.\n\n\n(\\*) the numbers of wikihow and big\\_patent datasets are not comparable because of change in tokenization and data:\n\n\n* wikihow dataset contains newline characters which is useful for paragraph segmentation, the C4 and HugeNews model's sentencepiece tokenizer doesn't encode newline and loose this information.\n* we update the BigPatent dataset to preserve casing, some format cleanings are also changed, please refer to change in TFDS.\n\n\nThe \"Mixed & Stochastic\" model has the following changes (from pegasus-large in the paper):\n\n\ntrained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).\ntrained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).\nthe model uniformly sample a gap sentence ratio between 15% and 45%.\nimportance sentences are sampled using a 20% uniform noise to importance scores.\nthe sentencepiece tokenizer is updated to be able to encode newline character.\n\n\nCitation"
] |
summarization
|
transformers
|
### Pegasus Models
See Docs: [here](https://huggingface.co/transformers/master/model_doc/pegasus.html)
Original TF 1 code [here](https://github.com/google-research/pegasus)
Authors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019
Maintained by: [@sshleifer](https://twitter.com/sam_shleifer)
Task: Summarization
The following is copied from the authors' README.
# Mixed & Stochastic Checkpoints
We train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated results are reported in this table.
| dataset | C4 | HugeNews | Mixed & Stochastic|
| ---- | ---- | ---- | ----|
| xsum | 45.20/22.06/36.99 | 47.21/24.56/39.25 | 47.60/24.83/39.64|
| cnn_dailymail | 43.90/21.20/40.76 | 44.17/21.47/41.11 | 44.16/21.56/41.30|
| newsroom | 45.07/33.39/41.28 | 45.15/33.51/41.33 | 45.98/34.20/42.18|
| multi_news | 46.74/17.95/24.26 | 47.52/18.72/24.91 | 47.65/18.75/24.95|
| gigaword | 38.75/19.96/36.14 | 39.12/19.86/36.24 | 39.65/20.47/36.76|
| wikihow | 43.07/19.70/34.79 | 41.35/18.51/33.42 | 46.39/22.12/38.41 *|
| reddit_tifu | 26.54/8.94/21.64 | 26.63/9.01/21.60 | 27.99/9.81/22.94|
| big_patent | 53.63/33.16/42.25 | 53.41/32.89/42.07 | 52.29/33.08/41.66 *|
| arxiv | 44.70/17.27/25.80 | 44.67/17.18/25.73 | 44.21/16.95/25.67|
| pubmed | 45.49/19.90/27.69 | 45.09/19.56/27.42 | 45.97/20.15/28.25|
| aeslc | 37.69/21.85/36.84 | 37.40/21.22/36.45 | 37.68/21.25/36.51|
| billsum | 57.20/39.56/45.80 | 57.31/40.19/45.82 | 59.67/41.58/47.59|
The "Mixed & Stochastic" model has the following changes (relative to pegasus-large in the paper):
- trained on both C4 and HugeNews (the dataset mixture is weighted by their number of examples).
- trained for 1.5M steps instead of 500k (we observe slower convergence on pretraining perplexity).
- the model uniformly samples a gap sentence ratio between 15% and 45%.
- important sentences are sampled using importance scores perturbed with 20% uniform noise.
- the sentencepiece tokenizer is updated to be able to encode the newline character.
(*) the numbers for the wikihow and big_patent datasets are not comparable because of changes in tokenization and data:
- the wikihow dataset contains newline characters, which are useful for paragraph segmentation; the sentencepiece tokenizer of the C4 and HugeNews models does not encode newlines and loses this information.
- we updated the BigPatent dataset to preserve casing; some formatting cleanups also changed, please refer to the changes in TFDS.
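A minimal sketch, assuming the `transformers` library; the article text and the generation settings (beam size, maximum length) are illustrative, not values from the original card.
```
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

# Minimal sketch: abstractive summary of a news article with beam search.
model_name = "google/pegasus-cnn_dailymail"
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)

article = ("The city council voted on Tuesday to approve the new transit plan, "
           "which adds two light-rail lines and expands bus service downtown.")
batch = tokenizer(article, truncation=True, return_tensors="pt")
summary_ids = model.generate(**batch, num_beams=8, max_length=64)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```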
Citation
```
@misc{zhang2019pegasus,
title={PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization},
author={Jingqing Zhang and Yao Zhao and Mohammad Saleh and Peter J. Liu},
year={2019},
eprint={1912.08777},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "tags": ["summarization"]}
|
google/pegasus-cnn_dailymail
| null |
[
"transformers",
"pytorch",
"rust",
"pegasus",
"text2text-generation",
"summarization",
"en",
"arxiv:1912.08777",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1912.08777"
] |
[
"en"
] |
TAGS
#transformers #pytorch #rust #pegasus #text2text-generation #summarization #en #arxiv-1912.08777 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
### Pegasus Models
See Docs: here
Original TF 1 code here
Authors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019
Maintained by: @sshleifer
Task: Summarization
The following is copied from the authors' README.
Mixed & Stochastic Checkpoints
==============================
We train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated the results are reported in this table.
The "Mixed & Stochastic" model has the following changes:
* trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).
* trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).
* the model uniformly sample a gap sentence ratio between 15% and 45%.
* importance sentences are sampled using a 20% uniform noise to importance scores.
* the sentencepiece tokenizer is updated to be able to encode newline character.
(\*) the numbers of wikihow and big\_patent datasets are not comparable because of change in tokenization and data:
* wikihow dataset contains newline characters which is useful for paragraph segmentation, the C4 and HugeNews model's sentencepiece tokenizer doesn't encode newline and loose this information.
* we update the BigPatent dataset to preserve casing, some format cleanings are also changed, please refer to change in TFDS.
The "Mixed & Stochastic" model has the following changes (from pegasus-large in the paper):
trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).
trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).
the model uniformly sample a gap sentence ratio between 15% and 45%.
importance sentences are sampled using a 20% uniform noise to importance scores.
the sentencepiece tokenizer is updated to be able to encode newline character.
Citation
|
[
"### Pegasus Models\n\n\nSee Docs: here\n\n\nOriginal TF 1 code here\n\n\nAuthors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019\n\n\nMaintained by: @sshleifer\n\n\nTask: Summarization\n\n\nThe following is copied from the authors' README.\n\n\nMixed & Stochastic Checkpoints\n==============================\n\n\nWe train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated the results are reported in this table.\n\n\n\nThe \"Mixed & Stochastic\" model has the following changes:\n\n\n* trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).\n* trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).\n* the model uniformly sample a gap sentence ratio between 15% and 45%.\n* importance sentences are sampled using a 20% uniform noise to importance scores.\n* the sentencepiece tokenizer is updated to be able to encode newline character.\n\n\n(\\*) the numbers of wikihow and big\\_patent datasets are not comparable because of change in tokenization and data:\n\n\n* wikihow dataset contains newline characters which is useful for paragraph segmentation, the C4 and HugeNews model's sentencepiece tokenizer doesn't encode newline and loose this information.\n* we update the BigPatent dataset to preserve casing, some format cleanings are also changed, please refer to change in TFDS.\n\n\nThe \"Mixed & Stochastic\" model has the following changes (from pegasus-large in the paper):\n\n\ntrained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).\ntrained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).\nthe model uniformly sample a gap sentence ratio between 15% and 45%.\nimportance sentences are sampled using a 20% uniform noise to importance scores.\nthe sentencepiece tokenizer is updated to be able to encode newline character.\n\n\nCitation"
] |
[
"TAGS\n#transformers #pytorch #rust #pegasus #text2text-generation #summarization #en #arxiv-1912.08777 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### Pegasus Models\n\n\nSee Docs: here\n\n\nOriginal TF 1 code here\n\n\nAuthors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019\n\n\nMaintained by: @sshleifer\n\n\nTask: Summarization\n\n\nThe following is copied from the authors' README.\n\n\nMixed & Stochastic Checkpoints\n==============================\n\n\nWe train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated the results are reported in this table.\n\n\n\nThe \"Mixed & Stochastic\" model has the following changes:\n\n\n* trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).\n* trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).\n* the model uniformly sample a gap sentence ratio between 15% and 45%.\n* importance sentences are sampled using a 20% uniform noise to importance scores.\n* the sentencepiece tokenizer is updated to be able to encode newline character.\n\n\n(\\*) the numbers of wikihow and big\\_patent datasets are not comparable because of change in tokenization and data:\n\n\n* wikihow dataset contains newline characters which is useful for paragraph segmentation, the C4 and HugeNews model's sentencepiece tokenizer doesn't encode newline and loose this information.\n* we update the BigPatent dataset to preserve casing, some format cleanings are also changed, please refer to change in TFDS.\n\n\nThe \"Mixed & Stochastic\" model has the following changes (from pegasus-large in the paper):\n\n\ntrained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).\ntrained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).\nthe model uniformly sample a gap sentence ratio between 15% and 45%.\nimportance sentences are sampled using a 20% uniform noise to importance scores.\nthe sentencepiece tokenizer is updated to be able to encode newline character.\n\n\nCitation"
] |
summarization
|
transformers
|
### Pegasus Models
See Docs: [here](https://huggingface.co/transformers/master/model_doc/pegasus.html)
Original TF 1 code [here](https://github.com/google-research/pegasus)
Authors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019
Maintained by: [@sshleifer](https://twitter.com/sam_shleifer)
Task: Summarization
The following is copied from the authors' README.
# Mixed & Stochastic Checkpoints
We train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated results are reported in this table.
| dataset | C4 | HugeNews | Mixed & Stochastic|
| ---- | ---- | ---- | ----|
| xsum | 45.20/22.06/36.99 | 47.21/24.56/39.25 | 47.60/24.83/39.64|
| cnn_dailymail | 43.90/21.20/40.76 | 44.17/21.47/41.11 | 44.16/21.56/41.30|
| newsroom | 45.07/33.39/41.28 | 45.15/33.51/41.33 | 45.98/34.20/42.18|
| multi_news | 46.74/17.95/24.26 | 47.52/18.72/24.91 | 47.65/18.75/24.95|
| gigaword | 38.75/19.96/36.14 | 39.12/19.86/36.24 | 39.65/20.47/36.76|
| wikihow | 43.07/19.70/34.79 | 41.35/18.51/33.42 | 46.39/22.12/38.41 *|
| reddit_tifu | 26.54/8.94/21.64 | 26.63/9.01/21.60 | 27.99/9.81/22.94|
| big_patent | 53.63/33.16/42.25 | 53.41/32.89/42.07 | 52.29/33.08/41.66 *|
| arxiv | 44.70/17.27/25.80 | 44.67/17.18/25.73 | 44.21/16.95/25.67|
| pubmed | 45.49/19.90/27.69 | 45.09/19.56/27.42 | 45.97/20.15/28.25|
| aeslc | 37.69/21.85/36.84 | 37.40/21.22/36.45 | 37.68/21.25/36.51|
| billsum | 57.20/39.56/45.80 | 57.31/40.19/45.82 | 59.67/41.58/47.59|
The "Mixed & Stochastic" model has the following changes (relative to pegasus-large in the paper):
- trained on both C4 and HugeNews (the dataset mixture is weighted by their number of examples).
- trained for 1.5M steps instead of 500k (we observe slower convergence on pretraining perplexity).
- the model uniformly samples a gap sentence ratio between 15% and 45%.
- important sentences are sampled using importance scores perturbed with 20% uniform noise.
- the sentencepiece tokenizer is updated to be able to encode the newline character.
(*) the numbers for the wikihow and big_patent datasets are not comparable because of changes in tokenization and data:
- the wikihow dataset contains newline characters, which are useful for paragraph segmentation; the sentencepiece tokenizer of the C4 and HugeNews models does not encode newlines and loses this information.
- we updated the BigPatent dataset to preserve casing; some formatting cleanups also changed, please refer to the changes in TFDS.
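A minimal sketch using the `transformers` pipeline API for Gigaword-style headline generation; the input sentence is illustrative.
```
from transformers import pipeline

# Minimal sketch: headline-style summary of a single news sentence.
summarizer = pipeline("summarization", model="google/pegasus-gigaword")
sentence = ("the united nations released a report on thursday warning that global "
            "food prices rose sharply for the third consecutive month .")
print(summarizer(sentence))
```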
Citation
```
@misc{zhang2019pegasus,
title={PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization},
author={Jingqing Zhang and Yao Zhao and Mohammad Saleh and Peter J. Liu},
year={2019},
eprint={1912.08777},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "tags": ["summarization"], "datasets": ["gigaword"]}
|
google/pegasus-gigaword
| null |
[
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"summarization",
"en",
"dataset:gigaword",
"arxiv:1912.08777",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1912.08777"
] |
[
"en"
] |
TAGS
#transformers #pytorch #pegasus #text2text-generation #summarization #en #dataset-gigaword #arxiv-1912.08777 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
### Pegasus Models
See Docs: here
Original TF 1 code here
Authors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019
Maintained by: @sshleifer
Task: Summarization
The following is copied from the authors' README.
Mixed & Stochastic Checkpoints
==============================
We train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated the results are reported in this table.
The "Mixed & Stochastic" model has the following changes:
* trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).
* trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).
* the model uniformly sample a gap sentence ratio between 15% and 45%.
* importance sentences are sampled using a 20% uniform noise to importance scores.
* the sentencepiece tokenizer is updated to be able to encode newline character.
(\*) the numbers of wikihow and big\_patent datasets are not comparable because of change in tokenization and data:
* wikihow dataset contains newline characters which is useful for paragraph segmentation, the C4 and HugeNews model's sentencepiece tokenizer doesn't encode newline and loose this information.
* we update the BigPatent dataset to preserve casing, some format cleanings are also changed, please refer to change in TFDS.
The "Mixed & Stochastic" model has the following changes (from pegasus-large in the paper):
trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).
trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).
the model uniformly sample a gap sentence ratio between 15% and 45%.
importance sentences are sampled using a 20% uniform noise to importance scores.
the sentencepiece tokenizer is updated to be able to encode newline character.
Citation
|
[
"### Pegasus Models\n\n\nSee Docs: here\n\n\nOriginal TF 1 code here\n\n\nAuthors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019\n\n\nMaintained by: @sshleifer\n\n\nTask: Summarization\n\n\nThe following is copied from the authors' README.\n\n\nMixed & Stochastic Checkpoints\n==============================\n\n\nWe train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated the results are reported in this table.\n\n\n\nThe \"Mixed & Stochastic\" model has the following changes:\n\n\n* trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).\n* trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).\n* the model uniformly sample a gap sentence ratio between 15% and 45%.\n* importance sentences are sampled using a 20% uniform noise to importance scores.\n* the sentencepiece tokenizer is updated to be able to encode newline character.\n\n\n(\\*) the numbers of wikihow and big\\_patent datasets are not comparable because of change in tokenization and data:\n\n\n* wikihow dataset contains newline characters which is useful for paragraph segmentation, the C4 and HugeNews model's sentencepiece tokenizer doesn't encode newline and loose this information.\n* we update the BigPatent dataset to preserve casing, some format cleanings are also changed, please refer to change in TFDS.\n\n\nThe \"Mixed & Stochastic\" model has the following changes (from pegasus-large in the paper):\n\n\ntrained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).\ntrained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).\nthe model uniformly sample a gap sentence ratio between 15% and 45%.\nimportance sentences are sampled using a 20% uniform noise to importance scores.\nthe sentencepiece tokenizer is updated to be able to encode newline character.\n\n\nCitation"
] |
[
"TAGS\n#transformers #pytorch #pegasus #text2text-generation #summarization #en #dataset-gigaword #arxiv-1912.08777 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### Pegasus Models\n\n\nSee Docs: here\n\n\nOriginal TF 1 code here\n\n\nAuthors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019\n\n\nMaintained by: @sshleifer\n\n\nTask: Summarization\n\n\nThe following is copied from the authors' README.\n\n\nMixed & Stochastic Checkpoints\n==============================\n\n\nWe train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated the results are reported in this table.\n\n\n\nThe \"Mixed & Stochastic\" model has the following changes:\n\n\n* trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).\n* trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).\n* the model uniformly sample a gap sentence ratio between 15% and 45%.\n* importance sentences are sampled using a 20% uniform noise to importance scores.\n* the sentencepiece tokenizer is updated to be able to encode newline character.\n\n\n(\\*) the numbers of wikihow and big\\_patent datasets are not comparable because of change in tokenization and data:\n\n\n* wikihow dataset contains newline characters which is useful for paragraph segmentation, the C4 and HugeNews model's sentencepiece tokenizer doesn't encode newline and loose this information.\n* we update the BigPatent dataset to preserve casing, some format cleanings are also changed, please refer to change in TFDS.\n\n\nThe \"Mixed & Stochastic\" model has the following changes (from pegasus-large in the paper):\n\n\ntrained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).\ntrained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).\nthe model uniformly sample a gap sentence ratio between 15% and 45%.\nimportance sentences are sampled using a 20% uniform noise to importance scores.\nthe sentencepiece tokenizer is updated to be able to encode newline character.\n\n\nCitation"
] |
summarization
|
transformers
|
### Pegasus Models
See Docs: [here](https://huggingface.co/transformers/master/model_doc/pegasus.html)
Original TF 1 code [here](https://github.com/google-research/pegasus)
Authors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019
Maintained by: [@sshleifer](https://twitter.com/sam_shleifer)
Task: Summarization
The following is copied from the authors' README.
# Mixed & Stochastic Checkpoints
We train a Pegasus model with sampled gap-sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated results (ROUGE-1/ROUGE-2/ROUGE-L) are reported in the table below.
| dataset | C4 | HugeNews | Mixed & Stochastic|
| ---- | ---- | ---- | ----|
| xsum | 45.20/22.06/36.99 | 47.21/24.56/39.25 | 47.60/24.83/39.64|
| cnn_dailymail | 43.90/21.20/40.76 | 44.17/21.47/41.11 | 44.16/21.56/41.30|
| newsroom | 45.07/33.39/41.28 | 45.15/33.51/41.33 | 45.98/34.20/42.18|
| multi_news | 46.74/17.95/24.26 | 47.52/18.72/24.91 | 47.65/18.75/24.95|
| gigaword | 38.75/19.96/36.14 | 39.12/19.86/36.24 | 39.65/20.47/36.76|
| wikihow | 43.07/19.70/34.79 | 41.35/18.51/33.42 | 46.39/22.12/38.41 *|
| reddit_tifu | 26.54/8.94/21.64 | 26.63/9.01/21.60 | 27.99/9.81/22.94|
| big_patent | 53.63/33.16/42.25 | 53.41/32.89/42.07 | 52.29/33.08/41.66 *|
| arxiv | 44.70/17.27/25.80 | 44.67/17.18/25.73 | 44.21/16.95/25.67|
| pubmed | 45.49/19.90/27.69 | 45.09/19.56/27.42 | 45.97/20.15/28.25|
| aeslc | 37.69/21.85/36.84 | 37.40/21.22/36.45 | 37.68/21.25/36.51|
| billsum | 57.20/39.56/45.80 | 57.31/40.19/45.82 | 59.67/41.58/47.59|
The "Mixed & Stochastic" model has the following changes relative to pegasus-large in the paper:
- trained on both C4 and HugeNews (the dataset mixture is weighted by their number of examples).
- trained for 1.5M steps instead of 500k (we observe slower convergence on pretraining perplexity).
- the model uniformly samples a gap sentence ratio between 15% and 45%.
- important sentences are sampled by applying 20% uniform noise to the importance scores.
- the SentencePiece tokenizer is updated to be able to encode the newline character.
(*) the numbers for the wikihow and big_patent datasets are not comparable because of changes in tokenization and data:
- the wikihow dataset contains newline characters, which are useful for paragraph segmentation; the C4 and HugeNews models' SentencePiece tokenizer doesn't encode newlines and loses this information.
- we updated the BigPatent dataset to preserve casing and changed some of the formatting cleanup; please refer to the changes in TFDS.
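To make the stochastic selection above concrete, here is a minimal sketch of the sampling recipe. This is not the authors' implementation: `select_gap_sentences` and `importance_scores` are hypothetical names, the importance measure itself is a placeholder (in the paper it is based on ROUGE between each sentence and the rest of the document), and the multiplicative form of the 20% noise is an assumption.
```python
# Illustrative sketch of the "Mixed & Stochastic" gap-sentence selection; not the authors' code.
import numpy as np

def select_gap_sentences(sentences, importance_scores, rng=None):
    """Return the (sorted) indices of sentences to mask as gap sentences."""
    rng = rng or np.random.default_rng()
    # Uniformly sample a gap-sentence ratio between 15% and 45%.
    gsr = rng.uniform(0.15, 0.45)
    n_gap = max(1, round(gsr * len(sentences)))
    # Perturb the importance scores with 20% uniform noise (multiplicative form
    # assumed here) so that the selection is stochastic rather than greedy.
    scores = np.asarray(importance_scores, dtype=float)
    noisy = scores * rng.uniform(0.8, 1.2, size=scores.shape)
    # Keep the highest-scoring sentences, returned in document order.
    return sorted(np.argsort(noisy)[::-1][:n_gap].tolist())

sentences = ["First sentence.", "Second sentence.", "Third sentence.", "Fourth sentence."]
scores = [0.2, 0.9, 0.4, 0.6]
print(select_gap_sentences(sentences, scores))  # e.g. [1, 3]
```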
Citation
```
@misc{zhang2019pegasus,
title={PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization},
author={Jingqing Zhang and Yao Zhao and Mohammad Saleh and Peter J. Liu},
year={2019},
eprint={1912.08777},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
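For reference, here is a minimal usage sketch for this checkpoint with the Transformers Pegasus classes; the input text and generation settings are illustrative assumptions rather than a recommended configuration.
```python
from transformers import PegasusTokenizer, PegasusForConditionalGeneration

model_name = "google/pegasus-large"
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)

text = (
    "PEGASUS masks whole sentences during pre-training and learns to generate "
    "them, which transfers well to abstractive summarization after fine-tuning."
)
# Inputs longer than the model's maximum length are truncated.
batch = tokenizer(text, truncation=True, padding="longest", return_tensors="pt")
summary_ids = model.generate(**batch, num_beams=4, max_new_tokens=64)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```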
|
{"language": "en", "tags": ["summarization"]}
|
google/pegasus-large
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"pegasus",
"text2text-generation",
"summarization",
"en",
"arxiv:1912.08777",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1912.08777"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #pegasus #text2text-generation #summarization #en #arxiv-1912.08777 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
### Pegasus Models
See Docs: here
Original TF 1 code here
Authors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019
Maintained by: @sshleifer
Task: Summarization
The following is copied from the authors' README.
Mixed & Stochastic Checkpoints
==============================
We train a Pegasus model with sampled gap-sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated results are reported in this table.
The "Mixed & Stochastic" model has the following changes relative to pegasus-large in the paper:
* trained on both C4 and HugeNews (the dataset mixture is weighted by their number of examples).
* trained for 1.5M steps instead of 500k (we observe slower convergence on pretraining perplexity).
* the model uniformly samples a gap sentence ratio between 15% and 45%.
* important sentences are sampled by applying 20% uniform noise to the importance scores.
* the SentencePiece tokenizer is updated to be able to encode the newline character.
(\*) the numbers for the wikihow and big\_patent datasets are not comparable because of changes in tokenization and data:
* the wikihow dataset contains newline characters, which are useful for paragraph segmentation; the C4 and HugeNews models' SentencePiece tokenizer doesn't encode newlines and loses this information.
* we updated the BigPatent dataset to preserve casing and changed some of the formatting cleanup; please refer to the changes in TFDS.
Citation
|
[
"### Pegasus Models\n\n\nSee Docs: here\n\n\nOriginal TF 1 code here\n\n\nAuthors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019\n\n\nMaintained by: @sshleifer\n\n\nTask: Summarization\n\n\nThe following is copied from the authors' README.\n\n\nMixed & Stochastic Checkpoints\n==============================\n\n\nWe train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated the results are reported in this table.\n\n\n\nThe \"Mixed & Stochastic\" model has the following changes:\n\n\n* trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).\n* trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).\n* the model uniformly sample a gap sentence ratio between 15% and 45%.\n* importance sentences are sampled using a 20% uniform noise to importance scores.\n* the sentencepiece tokenizer is updated to be able to encode newline character.\n\n\n(\\*) the numbers of wikihow and big\\_patent datasets are not comparable because of change in tokenization and data:\n\n\n* wikihow dataset contains newline characters which is useful for paragraph segmentation, the C4 and HugeNews model's sentencepiece tokenizer doesn't encode newline and loose this information.\n* we update the BigPatent dataset to preserve casing, some format cleanings are also changed, please refer to change in TFDS.\n\n\nThe \"Mixed & Stochastic\" model has the following changes (from pegasus-large in the paper):\n\n\ntrained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).\ntrained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).\nthe model uniformly sample a gap sentence ratio between 15% and 45%.\nimportance sentences are sampled using a 20% uniform noise to importance scores.\nthe sentencepiece tokenizer is updated to be able to encode newline character.\n\n\nCitation"
] |
[
"TAGS\n#transformers #pytorch #tf #jax #pegasus #text2text-generation #summarization #en #arxiv-1912.08777 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### Pegasus Models\n\n\nSee Docs: here\n\n\nOriginal TF 1 code here\n\n\nAuthors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019\n\n\nMaintained by: @sshleifer\n\n\nTask: Summarization\n\n\nThe following is copied from the authors' README.\n\n\nMixed & Stochastic Checkpoints\n==============================\n\n\nWe train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated the results are reported in this table.\n\n\n\nThe \"Mixed & Stochastic\" model has the following changes:\n\n\n* trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).\n* trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).\n* the model uniformly sample a gap sentence ratio between 15% and 45%.\n* importance sentences are sampled using a 20% uniform noise to importance scores.\n* the sentencepiece tokenizer is updated to be able to encode newline character.\n\n\n(\\*) the numbers of wikihow and big\\_patent datasets are not comparable because of change in tokenization and data:\n\n\n* wikihow dataset contains newline characters which is useful for paragraph segmentation, the C4 and HugeNews model's sentencepiece tokenizer doesn't encode newline and loose this information.\n* we update the BigPatent dataset to preserve casing, some format cleanings are also changed, please refer to change in TFDS.\n\n\nThe \"Mixed & Stochastic\" model has the following changes (from pegasus-large in the paper):\n\n\ntrained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).\ntrained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).\nthe model uniformly sample a gap sentence ratio between 15% and 45%.\nimportance sentences are sampled using a 20% uniform noise to importance scores.\nthe sentencepiece tokenizer is updated to be able to encode newline character.\n\n\nCitation"
] |
summarization
|
transformers
|
### Pegasus Models
See Docs: [here](https://huggingface.co/transformers/master/model_doc/pegasus.html)
Original TF 1 code [here](https://github.com/google-research/pegasus)
Authors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019
Maintained by: [@sshleifer](https://twitter.com/sam_shleifer)
Task: Summarization
The following is copied from the authors' README.
# Mixed & Stochastic Checkpoints
We train a Pegasus model with sampled gap-sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated results (ROUGE-1/ROUGE-2/ROUGE-L) are reported in the table below.
| dataset | C4 | HugeNews | Mixed & Stochastic|
| ---- | ---- | ---- | ----|
| xsum | 45.20/22.06/36.99 | 47.21/24.56/39.25 | 47.60/24.83/39.64|
| cnn_dailymail | 43.90/21.20/40.76 | 44.17/21.47/41.11 | 44.16/21.56/41.30|
| newsroom | 45.07/33.39/41.28 | 45.15/33.51/41.33 | 45.98/34.20/42.18|
| multi_news | 46.74/17.95/24.26 | 47.52/18.72/24.91 | 47.65/18.75/24.95|
| gigaword | 38.75/19.96/36.14 | 39.12/19.86/36.24 | 39.65/20.47/36.76|
| wikihow | 43.07/19.70/34.79 | 41.35/18.51/33.42 | 46.39/22.12/38.41 *|
| reddit_tifu | 26.54/8.94/21.64 | 26.63/9.01/21.60 | 27.99/9.81/22.94|
| big_patent | 53.63/33.16/42.25 | 53.41/32.89/42.07 | 52.29/33.08/41.66 *|
| arxiv | 44.70/17.27/25.80 | 44.67/17.18/25.73 | 44.21/16.95/25.67|
| pubmed | 45.49/19.90/27.69 | 45.09/19.56/27.42 | 45.97/20.15/28.25|
| aeslc | 37.69/21.85/36.84 | 37.40/21.22/36.45 | 37.68/21.25/36.51|
| billsum | 57.20/39.56/45.80 | 57.31/40.19/45.82 | 59.67/41.58/47.59|
The "Mixed & Stochastic" model has the following changes relative to pegasus-large in the paper:
- trained on both C4 and HugeNews (the dataset mixture is weighted by their number of examples).
- trained for 1.5M steps instead of 500k (we observe slower convergence on pretraining perplexity).
- the model uniformly samples a gap sentence ratio between 15% and 45%.
- important sentences are sampled by applying 20% uniform noise to the importance scores.
- the SentencePiece tokenizer is updated to be able to encode the newline character.
(*) the numbers for the wikihow and big_patent datasets are not comparable because of changes in tokenization and data:
- the wikihow dataset contains newline characters, which are useful for paragraph segmentation; the C4 and HugeNews models' SentencePiece tokenizer doesn't encode newlines and loses this information.
- we updated the BigPatent dataset to preserve casing and changed some of the formatting cleanup; please refer to the changes in TFDS.
Citation
```
@misc{zhang2019pegasus,
title={PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization},
author={Jingqing Zhang and Yao Zhao and Mohammad Saleh and Peter J. Liu},
year={2019},
eprint={1912.08777},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
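Since this checkpoint is fine-tuned on Multi-News, a typical input is several articles about the same event. Below is a minimal sketch; how the articles are joined (a plain newline here) is an assumption and may not match the concatenation used during fine-tuning.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "google/pegasus-multi_news"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

articles = [
    "First article about the event ...",
    "Second article covering the same story from another outlet ...",
]
# Join the source documents into a single input sequence (separator is an assumption).
text = "\n".join(articles)
batch = tokenizer(text, truncation=True, return_tensors="pt")
summary_ids = model.generate(**batch, num_beams=4, max_new_tokens=256)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```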
|
{"language": "en", "tags": ["summarization"]}
|
google/pegasus-multi_news
| null |
[
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"summarization",
"en",
"arxiv:1912.08777",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1912.08777"
] |
[
"en"
] |
TAGS
#transformers #pytorch #pegasus #text2text-generation #summarization #en #arxiv-1912.08777 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
### Pegasus Models
See Docs: here
Original TF 1 code here
Authors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019
Maintained by: @sshleifer
Task: Summarization
The following is copied from the authors' README.
Mixed & Stochastic Checkpoints
==============================
We train a Pegasus model with sampled gap-sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated results are reported in this table.
The "Mixed & Stochastic" model has the following changes relative to pegasus-large in the paper:
* trained on both C4 and HugeNews (the dataset mixture is weighted by their number of examples).
* trained for 1.5M steps instead of 500k (we observe slower convergence on pretraining perplexity).
* the model uniformly samples a gap sentence ratio between 15% and 45%.
* important sentences are sampled by applying 20% uniform noise to the importance scores.
* the SentencePiece tokenizer is updated to be able to encode the newline character.
(\*) the numbers for the wikihow and big\_patent datasets are not comparable because of changes in tokenization and data:
* the wikihow dataset contains newline characters, which are useful for paragraph segmentation; the C4 and HugeNews models' SentencePiece tokenizer doesn't encode newlines and loses this information.
* we updated the BigPatent dataset to preserve casing and changed some of the formatting cleanup; please refer to the changes in TFDS.
Citation
|
[
"### Pegasus Models\n\n\nSee Docs: here\n\n\nOriginal TF 1 code here\n\n\nAuthors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019\n\n\nMaintained by: @sshleifer\n\n\nTask: Summarization\n\n\nThe following is copied from the authors' README.\n\n\nMixed & Stochastic Checkpoints\n==============================\n\n\nWe train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated the results are reported in this table.\n\n\n\nThe \"Mixed & Stochastic\" model has the following changes:\n\n\n* trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).\n* trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).\n* the model uniformly sample a gap sentence ratio between 15% and 45%.\n* importance sentences are sampled using a 20% uniform noise to importance scores.\n* the sentencepiece tokenizer is updated to be able to encode newline character.\n\n\n(\\*) the numbers of wikihow and big\\_patent datasets are not comparable because of change in tokenization and data:\n\n\n* wikihow dataset contains newline characters which is useful for paragraph segmentation, the C4 and HugeNews model's sentencepiece tokenizer doesn't encode newline and loose this information.\n* we update the BigPatent dataset to preserve casing, some format cleanings are also changed, please refer to change in TFDS.\n\n\nThe \"Mixed & Stochastic\" model has the following changes (from pegasus-large in the paper):\n\n\ntrained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).\ntrained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).\nthe model uniformly sample a gap sentence ratio between 15% and 45%.\nimportance sentences are sampled using a 20% uniform noise to importance scores.\nthe sentencepiece tokenizer is updated to be able to encode newline character.\n\n\nCitation"
] |
[
"TAGS\n#transformers #pytorch #pegasus #text2text-generation #summarization #en #arxiv-1912.08777 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### Pegasus Models\n\n\nSee Docs: here\n\n\nOriginal TF 1 code here\n\n\nAuthors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019\n\n\nMaintained by: @sshleifer\n\n\nTask: Summarization\n\n\nThe following is copied from the authors' README.\n\n\nMixed & Stochastic Checkpoints\n==============================\n\n\nWe train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated the results are reported in this table.\n\n\n\nThe \"Mixed & Stochastic\" model has the following changes:\n\n\n* trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).\n* trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).\n* the model uniformly sample a gap sentence ratio between 15% and 45%.\n* importance sentences are sampled using a 20% uniform noise to importance scores.\n* the sentencepiece tokenizer is updated to be able to encode newline character.\n\n\n(\\*) the numbers of wikihow and big\\_patent datasets are not comparable because of change in tokenization and data:\n\n\n* wikihow dataset contains newline characters which is useful for paragraph segmentation, the C4 and HugeNews model's sentencepiece tokenizer doesn't encode newline and loose this information.\n* we update the BigPatent dataset to preserve casing, some format cleanings are also changed, please refer to change in TFDS.\n\n\nThe \"Mixed & Stochastic\" model has the following changes (from pegasus-large in the paper):\n\n\ntrained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).\ntrained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).\nthe model uniformly sample a gap sentence ratio between 15% and 45%.\nimportance sentences are sampled using a 20% uniform noise to importance scores.\nthe sentencepiece tokenizer is updated to be able to encode newline character.\n\n\nCitation"
] |
summarization
|
transformers
|
### Pegasus Models
See Docs: [here](https://huggingface.co/transformers/master/model_doc/pegasus.html)
Original TF 1 code [here](https://github.com/google-research/pegasus)
Authors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019
Maintained by: [@sshleifer](https://twitter.com/sam_shleifer)
Task: Summarization
The following is copied from the authors' README.
# Mixed & Stochastic Checkpoints
We train a Pegasus model with sampled gap-sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated results (ROUGE-1/ROUGE-2/ROUGE-L) are reported in the table below.
| dataset | C4 | HugeNews | Mixed & Stochastic|
| ---- | ---- | ---- | ----|
| xsum | 45.20/22.06/36.99 | 47.21/24.56/39.25 | 47.60/24.83/39.64|
| cnn_dailymail | 43.90/21.20/40.76 | 44.17/21.47/41.11 | 44.16/21.56/41.30|
| newsroom | 45.07/33.39/41.28 | 45.15/33.51/41.33 | 45.98/34.20/42.18|
| multi_news | 46.74/17.95/24.26 | 47.52/18.72/24.91 | 47.65/18.75/24.95|
| gigaword | 38.75/19.96/36.14 | 39.12/19.86/36.24 | 39.65/20.47/36.76|
| wikihow | 43.07/19.70/34.79 | 41.35/18.51/33.42 | 46.39/22.12/38.41 *|
| reddit_tifu | 26.54/8.94/21.64 | 26.63/9.01/21.60 | 27.99/9.81/22.94|
| big_patent | 53.63/33.16/42.25 | 53.41/32.89/42.07 | 52.29/33.08/41.66 *|
| arxiv | 44.70/17.27/25.80 | 44.67/17.18/25.73 | 44.21/16.95/25.67|
| pubmed | 45.49/19.90/27.69 | 45.09/19.56/27.42 | 45.97/20.15/28.25|
| aeslc | 37.69/21.85/36.84 | 37.40/21.22/36.45 | 37.68/21.25/36.51|
| billsum | 57.20/39.56/45.80 | 57.31/40.19/45.82 | 59.67/41.58/47.59|
The "Mixed & Stochastic" model has the following changes relative to pegasus-large in the paper:
- trained on both C4 and HugeNews (the dataset mixture is weighted by their number of examples).
- trained for 1.5M steps instead of 500k (we observe slower convergence on pretraining perplexity).
- the model uniformly samples a gap sentence ratio between 15% and 45%.
- important sentences are sampled by applying 20% uniform noise to the importance scores.
- the SentencePiece tokenizer is updated to be able to encode the newline character.
(*) the numbers for the wikihow and big_patent datasets are not comparable because of changes in tokenization and data:
- the wikihow dataset contains newline characters, which are useful for paragraph segmentation; the C4 and HugeNews models' SentencePiece tokenizer doesn't encode newlines and loses this information.
- we updated the BigPatent dataset to preserve casing and changed some of the formatting cleanup; please refer to the changes in TFDS.
Citation
```
@misc{zhang2019pegasus,
title={PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization},
author={Jingqing Zhang and Yao Zhao and Mohammad Saleh and Peter J. Liu},
year={2019},
eprint={1912.08777},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
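Below is a minimal sketch using the high-level pipeline API with this checkpoint; the article text is a placeholder and the length limits are arbitrary.
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="google/pegasus-newsroom")
article = "Replace this placeholder with the full text of a news article ..."
result = summarizer(article, max_length=128, min_length=32)
print(result[0]["summary_text"])
```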
|
{"language": "en", "tags": ["summarization"]}
|
google/pegasus-newsroom
| null |
[
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"summarization",
"en",
"arxiv:1912.08777",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1912.08777"
] |
[
"en"
] |
TAGS
#transformers #pytorch #pegasus #text2text-generation #summarization #en #arxiv-1912.08777 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
### Pegasus Models
See Docs: here
Original TF 1 code here
Authors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019
Maintained by: @sshleifer
Task: Summarization
The following is copied from the authors' README.
Mixed & Stochastic Checkpoints
==============================
We train a Pegasus model with sampled gap-sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated results are reported in this table.
The "Mixed & Stochastic" model has the following changes relative to pegasus-large in the paper:
* trained on both C4 and HugeNews (the dataset mixture is weighted by their number of examples).
* trained for 1.5M steps instead of 500k (we observe slower convergence on pretraining perplexity).
* the model uniformly samples a gap sentence ratio between 15% and 45%.
* important sentences are sampled by applying 20% uniform noise to the importance scores.
* the SentencePiece tokenizer is updated to be able to encode the newline character.
(\*) the numbers for the wikihow and big\_patent datasets are not comparable because of changes in tokenization and data:
* the wikihow dataset contains newline characters, which are useful for paragraph segmentation; the C4 and HugeNews models' SentencePiece tokenizer doesn't encode newlines and loses this information.
* we updated the BigPatent dataset to preserve casing and changed some of the formatting cleanup; please refer to the changes in TFDS.
Citation
|
[
"### Pegasus Models\n\n\nSee Docs: here\n\n\nOriginal TF 1 code here\n\n\nAuthors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019\n\n\nMaintained by: @sshleifer\n\n\nTask: Summarization\n\n\nThe following is copied from the authors' README.\n\n\nMixed & Stochastic Checkpoints\n==============================\n\n\nWe train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated the results are reported in this table.\n\n\n\nThe \"Mixed & Stochastic\" model has the following changes:\n\n\n* trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).\n* trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).\n* the model uniformly sample a gap sentence ratio between 15% and 45%.\n* importance sentences are sampled using a 20% uniform noise to importance scores.\n* the sentencepiece tokenizer is updated to be able to encode newline character.\n\n\n(\\*) the numbers of wikihow and big\\_patent datasets are not comparable because of change in tokenization and data:\n\n\n* wikihow dataset contains newline characters which is useful for paragraph segmentation, the C4 and HugeNews model's sentencepiece tokenizer doesn't encode newline and loose this information.\n* we update the BigPatent dataset to preserve casing, some format cleanings are also changed, please refer to change in TFDS.\n\n\nThe \"Mixed & Stochastic\" model has the following changes (from pegasus-large in the paper):\n\n\ntrained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).\ntrained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).\nthe model uniformly sample a gap sentence ratio between 15% and 45%.\nimportance sentences are sampled using a 20% uniform noise to importance scores.\nthe sentencepiece tokenizer is updated to be able to encode newline character.\n\n\nCitation"
] |
[
"TAGS\n#transformers #pytorch #pegasus #text2text-generation #summarization #en #arxiv-1912.08777 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### Pegasus Models\n\n\nSee Docs: here\n\n\nOriginal TF 1 code here\n\n\nAuthors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019\n\n\nMaintained by: @sshleifer\n\n\nTask: Summarization\n\n\nThe following is copied from the authors' README.\n\n\nMixed & Stochastic Checkpoints\n==============================\n\n\nWe train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated the results are reported in this table.\n\n\n\nThe \"Mixed & Stochastic\" model has the following changes:\n\n\n* trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).\n* trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).\n* the model uniformly sample a gap sentence ratio between 15% and 45%.\n* importance sentences are sampled using a 20% uniform noise to importance scores.\n* the sentencepiece tokenizer is updated to be able to encode newline character.\n\n\n(\\*) the numbers of wikihow and big\\_patent datasets are not comparable because of change in tokenization and data:\n\n\n* wikihow dataset contains newline characters which is useful for paragraph segmentation, the C4 and HugeNews model's sentencepiece tokenizer doesn't encode newline and loose this information.\n* we update the BigPatent dataset to preserve casing, some format cleanings are also changed, please refer to change in TFDS.\n\n\nThe \"Mixed & Stochastic\" model has the following changes (from pegasus-large in the paper):\n\n\ntrained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).\ntrained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).\nthe model uniformly sample a gap sentence ratio between 15% and 45%.\nimportance sentences are sampled using a 20% uniform noise to importance scores.\nthe sentencepiece tokenizer is updated to be able to encode newline character.\n\n\nCitation"
] |
summarization
|
transformers
|
### Pegasus Models
See Docs: [here](https://huggingface.co/transformers/master/model_doc/pegasus.html)
Original TF 1 code [here](https://github.com/google-research/pegasus)
Authors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019
Maintained by: [@sshleifer](https://twitter.com/sam_shleifer)
Task: Summarization
The following is copied from the authors' README.
# Mixed & Stochastic Checkpoints
We train a Pegasus model with sampled gap-sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated results (ROUGE-1/ROUGE-2/ROUGE-L) are reported in the table below.
| dataset | C4 | HugeNews | Mixed & Stochastic|
| ---- | ---- | ---- | ----|
| xsum | 45.20/22.06/36.99 | 47.21/24.56/39.25 | 47.60/24.83/39.64|
| cnn_dailymail | 43.90/21.20/40.76 | 44.17/21.47/41.11 | 44.16/21.56/41.30|
| newsroom | 45.07/33.39/41.28 | 45.15/33.51/41.33 | 45.98/34.20/42.18|
| multi_news | 46.74/17.95/24.26 | 47.52/18.72/24.91 | 47.65/18.75/24.95|
| gigaword | 38.75/19.96/36.14 | 39.12/19.86/36.24 | 39.65/20.47/36.76|
| wikihow | 43.07/19.70/34.79 | 41.35/18.51/33.42 | 46.39/22.12/38.41 *|
| reddit_tifu | 26.54/8.94/21.64 | 26.63/9.01/21.60 | 27.99/9.81/22.94|
| big_patent | 53.63/33.16/42.25 | 53.41/32.89/42.07 | 52.29/33.08/41.66 *|
| arxiv | 44.70/17.27/25.80 | 44.67/17.18/25.73 | 44.21/16.95/25.67|
| pubmed | 45.49/19.90/27.69 | 45.09/19.56/27.42 | 45.97/20.15/28.25|
| aeslc | 37.69/21.85/36.84 | 37.40/21.22/36.45 | 37.68/21.25/36.51|
| billsum | 57.20/39.56/45.80 | 57.31/40.19/45.82 | 59.67/41.58/47.59|
The "Mixed & Stochastic" model has the following changes relative to pegasus-large in the paper:
- trained on both C4 and HugeNews (the dataset mixture is weighted by their number of examples).
- trained for 1.5M steps instead of 500k (we observe slower convergence on pretraining perplexity).
- the model uniformly samples a gap sentence ratio between 15% and 45%.
- important sentences are sampled by applying 20% uniform noise to the importance scores.
- the SentencePiece tokenizer is updated to be able to encode the newline character.
(*) the numbers for the wikihow and big_patent datasets are not comparable because of changes in tokenization and data:
- the wikihow dataset contains newline characters, which are useful for paragraph segmentation; the C4 and HugeNews models' SentencePiece tokenizer doesn't encode newlines and loses this information.
- we updated the BigPatent dataset to preserve casing and changed some of the formatting cleanup; please refer to the changes in TFDS.
Citation
```
@misc{zhang2019pegasus,
title={PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization},
author={Jingqing Zhang and Yao Zhao and Mohammad Saleh and Peter J. Liu},
year={2019},
eprint={1912.08777},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
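This checkpoint is fine-tuned on PubMed articles, which are usually much longer than the model's maximum input length, so truncation matters in practice. A minimal sketch under that assumption:
```python
from transformers import PegasusTokenizer, PegasusForConditionalGeneration

model_name = "google/pegasus-pubmed"
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)

article = "Full text of a biomedical article ..."  # placeholder
# Truncate to the model's maximum input length; everything beyond it is dropped.
max_len = model.config.max_position_embeddings
batch = tokenizer(article, truncation=True, max_length=max_len, return_tensors="pt")
summary_ids = model.generate(**batch, num_beams=4, max_new_tokens=256)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```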
|
{"language": "en", "tags": ["summarization"]}
|
google/pegasus-pubmed
| null |
[
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"summarization",
"en",
"arxiv:1912.08777",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1912.08777"
] |
[
"en"
] |
TAGS
#transformers #pytorch #pegasus #text2text-generation #summarization #en #arxiv-1912.08777 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
### Pegasus Models
See Docs: here
Original TF 1 code here
Authors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019
Maintained by: @sshleifer
Task: Summarization
The following is copied from the authors' README.
Mixed & Stochastic Checkpoints
==============================
We train a Pegasus model with sampled gap-sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated results are reported in this table.
The "Mixed & Stochastic" model has the following changes relative to pegasus-large in the paper:
* trained on both C4 and HugeNews (the dataset mixture is weighted by their number of examples).
* trained for 1.5M steps instead of 500k (we observe slower convergence on pretraining perplexity).
* the model uniformly samples a gap sentence ratio between 15% and 45%.
* important sentences are sampled by applying 20% uniform noise to the importance scores.
* the SentencePiece tokenizer is updated to be able to encode the newline character.
(\*) the numbers for the wikihow and big\_patent datasets are not comparable because of changes in tokenization and data:
* the wikihow dataset contains newline characters, which are useful for paragraph segmentation; the C4 and HugeNews models' SentencePiece tokenizer doesn't encode newlines and loses this information.
* we updated the BigPatent dataset to preserve casing and changed some of the formatting cleanup; please refer to the changes in TFDS.
Citation
|
[
"### Pegasus Models\n\n\nSee Docs: here\n\n\nOriginal TF 1 code here\n\n\nAuthors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019\n\n\nMaintained by: @sshleifer\n\n\nTask: Summarization\n\n\nThe following is copied from the authors' README.\n\n\nMixed & Stochastic Checkpoints\n==============================\n\n\nWe train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated the results are reported in this table.\n\n\n\nThe \"Mixed & Stochastic\" model has the following changes:\n\n\n* trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).\n* trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).\n* the model uniformly sample a gap sentence ratio between 15% and 45%.\n* importance sentences are sampled using a 20% uniform noise to importance scores.\n* the sentencepiece tokenizer is updated to be able to encode newline character.\n\n\n(\\*) the numbers of wikihow and big\\_patent datasets are not comparable because of change in tokenization and data:\n\n\n* wikihow dataset contains newline characters which is useful for paragraph segmentation, the C4 and HugeNews model's sentencepiece tokenizer doesn't encode newline and loose this information.\n* we update the BigPatent dataset to preserve casing, some format cleanings are also changed, please refer to change in TFDS.\n\n\nThe \"Mixed & Stochastic\" model has the following changes (from pegasus-large in the paper):\n\n\ntrained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).\ntrained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).\nthe model uniformly sample a gap sentence ratio between 15% and 45%.\nimportance sentences are sampled using a 20% uniform noise to importance scores.\nthe sentencepiece tokenizer is updated to be able to encode newline character.\n\n\nCitation"
] |
[
"TAGS\n#transformers #pytorch #pegasus #text2text-generation #summarization #en #arxiv-1912.08777 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### Pegasus Models\n\n\nSee Docs: here\n\n\nOriginal TF 1 code here\n\n\nAuthors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019\n\n\nMaintained by: @sshleifer\n\n\nTask: Summarization\n\n\nThe following is copied from the authors' README.\n\n\nMixed & Stochastic Checkpoints\n==============================\n\n\nWe train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated the results are reported in this table.\n\n\n\nThe \"Mixed & Stochastic\" model has the following changes:\n\n\n* trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).\n* trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).\n* the model uniformly sample a gap sentence ratio between 15% and 45%.\n* importance sentences are sampled using a 20% uniform noise to importance scores.\n* the sentencepiece tokenizer is updated to be able to encode newline character.\n\n\n(\\*) the numbers of wikihow and big\\_patent datasets are not comparable because of change in tokenization and data:\n\n\n* wikihow dataset contains newline characters which is useful for paragraph segmentation, the C4 and HugeNews model's sentencepiece tokenizer doesn't encode newline and loose this information.\n* we update the BigPatent dataset to preserve casing, some format cleanings are also changed, please refer to change in TFDS.\n\n\nThe \"Mixed & Stochastic\" model has the following changes (from pegasus-large in the paper):\n\n\ntrained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).\ntrained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).\nthe model uniformly sample a gap sentence ratio between 15% and 45%.\nimportance sentences are sampled using a 20% uniform noise to importance scores.\nthe sentencepiece tokenizer is updated to be able to encode newline character.\n\n\nCitation"
] |
summarization
|
transformers
|
### Pegasus Models
See Docs: [here](https://huggingface.co/transformers/master/model_doc/pegasus.html)
Original TF 1 code [here](https://github.com/google-research/pegasus)
Authors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019
Maintained by: [@sshleifer](https://twitter.com/sam_shleifer)
Task: Summarization
The following is copied from the authors' README.
# Mixed & Stochastic Checkpoints
We train a Pegasus model with sampled gap-sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated results (ROUGE-1/ROUGE-2/ROUGE-L) are reported in the table below.
| dataset | C4 | HugeNews | Mixed & Stochastic|
| ---- | ---- | ---- | ----|
| xsum | 45.20/22.06/36.99 | 47.21/24.56/39.25 | 47.60/24.83/39.64|
| cnn_dailymail | 43.90/21.20/40.76 | 44.17/21.47/41.11 | 44.16/21.56/41.30|
| newsroom | 45.07/33.39/41.28 | 45.15/33.51/41.33 | 45.98/34.20/42.18|
| multi_news | 46.74/17.95/24.26 | 47.52/18.72/24.91 | 47.65/18.75/24.95|
| gigaword | 38.75/19.96/36.14 | 39.12/19.86/36.24 | 39.65/20.47/36.76|
| wikihow | 43.07/19.70/34.79 | 41.35/18.51/33.42 | 46.39/22.12/38.41 *|
| reddit_tifu | 26.54/8.94/21.64 | 26.63/9.01/21.60 | 27.99/9.81/22.94|
| big_patent | 53.63/33.16/42.25 | 53.41/32.89/42.07 | 52.29/33.08/41.66 *|
| arxiv | 44.70/17.27/25.80 | 44.67/17.18/25.73 | 44.21/16.95/25.67|
| pubmed | 45.49/19.90/27.69 | 45.09/19.56/27.42 | 45.97/20.15/28.25|
| aeslc | 37.69/21.85/36.84 | 37.40/21.22/36.45 | 37.68/21.25/36.51|
| billsum | 57.20/39.56/45.80 | 57.31/40.19/45.82 | 59.67/41.58/47.59|
The "Mixed & Stochastic" model has the following changes relative to pegasus-large in the paper:
- trained on both C4 and HugeNews (the dataset mixture is weighted by their number of examples).
- trained for 1.5M steps instead of 500k (we observe slower convergence on pretraining perplexity).
- the model uniformly samples a gap sentence ratio between 15% and 45%.
- important sentences are sampled by applying 20% uniform noise to the importance scores.
- the SentencePiece tokenizer is updated to be able to encode the newline character.
(*) the numbers for the wikihow and big_patent datasets are not comparable because of changes in tokenization and data:
- the wikihow dataset contains newline characters, which are useful for paragraph segmentation; the C4 and HugeNews models' SentencePiece tokenizer doesn't encode newlines and loses this information.
- we updated the BigPatent dataset to preserve casing and changed some of the formatting cleanup; please refer to the changes in TFDS.
Citation
```
@misc{zhang2019pegasus,
title={PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization},
author={Jingqing Zhang and Yao Zhao and Mohammad Saleh and Peter J. Liu},
year={2019},
eprint={1912.08777},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
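Below is a minimal sketch for summarizing a small batch of informal posts with this checkpoint; the posts are placeholders.
```python
from transformers import PegasusTokenizer, PegasusForConditionalGeneration

model_name = "google/pegasus-reddit_tifu"
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)

posts = [
    "TIFU by replacing this placeholder with a real, much longer post ...",
    "Another long, informal story that needs a short summary ...",
]
batch = tokenizer(posts, truncation=True, padding="longest", return_tensors="pt")
summary_ids = model.generate(**batch, num_beams=4, max_new_tokens=64)
for summary in tokenizer.batch_decode(summary_ids, skip_special_tokens=True):
    print(summary)
```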
|
{"language": "en", "tags": ["summarization"]}
|
google/pegasus-reddit_tifu
| null |
[
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"summarization",
"en",
"arxiv:1912.08777",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1912.08777"
] |
[
"en"
] |
TAGS
#transformers #pytorch #pegasus #text2text-generation #summarization #en #arxiv-1912.08777 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
### Pegasus Models
See Docs: here
Original TF 1 code here
Authors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019
Maintained by: @sshleifer
Task: Summarization
The following is copied from the authors' README.
Mixed & Stochastic Checkpoints
==============================
We train a Pegasus model with sampled gap-sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated results are reported in this table.
The "Mixed & Stochastic" model has the following changes relative to pegasus-large in the paper:
* trained on both C4 and HugeNews (the dataset mixture is weighted by their number of examples).
* trained for 1.5M steps instead of 500k (we observe slower convergence on pretraining perplexity).
* the model uniformly samples a gap sentence ratio between 15% and 45%.
* important sentences are sampled by applying 20% uniform noise to the importance scores.
* the SentencePiece tokenizer is updated to be able to encode the newline character.
(\*) the numbers for the wikihow and big\_patent datasets are not comparable because of changes in tokenization and data:
* the wikihow dataset contains newline characters, which are useful for paragraph segmentation; the C4 and HugeNews models' SentencePiece tokenizer doesn't encode newlines and loses this information.
* we updated the BigPatent dataset to preserve casing and changed some of the formatting cleanup; please refer to the changes in TFDS.
Citation
|
[
"### Pegasus Models\n\n\nSee Docs: here\n\n\nOriginal TF 1 code here\n\n\nAuthors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019\n\n\nMaintained by: @sshleifer\n\n\nTask: Summarization\n\n\nThe following is copied from the authors' README.\n\n\nMixed & Stochastic Checkpoints\n==============================\n\n\nWe train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated the results are reported in this table.\n\n\n\nThe \"Mixed & Stochastic\" model has the following changes:\n\n\n* trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).\n* trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).\n* the model uniformly sample a gap sentence ratio between 15% and 45%.\n* importance sentences are sampled using a 20% uniform noise to importance scores.\n* the sentencepiece tokenizer is updated to be able to encode newline character.\n\n\n(\\*) the numbers of wikihow and big\\_patent datasets are not comparable because of change in tokenization and data:\n\n\n* wikihow dataset contains newline characters which is useful for paragraph segmentation, the C4 and HugeNews model's sentencepiece tokenizer doesn't encode newline and loose this information.\n* we update the BigPatent dataset to preserve casing, some format cleanings are also changed, please refer to change in TFDS.\n\n\nThe \"Mixed & Stochastic\" model has the following changes (from pegasus-large in the paper):\n\n\ntrained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).\ntrained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).\nthe model uniformly sample a gap sentence ratio between 15% and 45%.\nimportance sentences are sampled using a 20% uniform noise to importance scores.\nthe sentencepiece tokenizer is updated to be able to encode newline character.\n\n\nCitation"
] |
[
"TAGS\n#transformers #pytorch #pegasus #text2text-generation #summarization #en #arxiv-1912.08777 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### Pegasus Models\n\n\nSee Docs: here\n\n\nOriginal TF 1 code here\n\n\nAuthors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019\n\n\nMaintained by: @sshleifer\n\n\nTask: Summarization\n\n\nThe following is copied from the authors' README.\n\n\nMixed & Stochastic Checkpoints\n==============================\n\n\nWe train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated the results are reported in this table.\n\n\n\nThe \"Mixed & Stochastic\" model has the following changes:\n\n\n* trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).\n* trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).\n* the model uniformly sample a gap sentence ratio between 15% and 45%.\n* importance sentences are sampled using a 20% uniform noise to importance scores.\n* the sentencepiece tokenizer is updated to be able to encode newline character.\n\n\n(\\*) the numbers of wikihow and big\\_patent datasets are not comparable because of change in tokenization and data:\n\n\n* wikihow dataset contains newline characters which is useful for paragraph segmentation, the C4 and HugeNews model's sentencepiece tokenizer doesn't encode newline and loose this information.\n* we update the BigPatent dataset to preserve casing, some format cleanings are also changed, please refer to change in TFDS.\n\n\nThe \"Mixed & Stochastic\" model has the following changes (from pegasus-large in the paper):\n\n\ntrained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).\ntrained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).\nthe model uniformly sample a gap sentence ratio between 15% and 45%.\nimportance sentences are sampled using a 20% uniform noise to importance scores.\nthe sentencepiece tokenizer is updated to be able to encode newline character.\n\n\nCitation"
] |
summarization
|
transformers
|
### Pegasus Models
See Docs: [here](https://huggingface.co/transformers/master/model_doc/pegasus.html)
Original TF 1 code [here](https://github.com/google-research/pegasus)
Authors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019
Maintained by: [@sshleifer](https://twitter.com/sam_shleifer)
Task: Summarization
The following is copied from the authors' README.
# Mixed & Stochastic Checkpoints
We train a Pegasus model with sampled gap-sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated results (ROUGE-1/ROUGE-2/ROUGE-L) are reported in the table below.
| dataset | C4 | HugeNews | Mixed & Stochastic|
| ---- | ---- | ---- | ----|
| xsum | 45.20/22.06/36.99 | 47.21/24.56/39.25 | 47.60/24.83/39.64|
| cnn_dailymail | 43.90/21.20/40.76 | 44.17/21.47/41.11 | 44.16/21.56/41.30|
| newsroom | 45.07/33.39/41.28 | 45.15/33.51/41.33 | 45.98/34.20/42.18|
| multi_news | 46.74/17.95/24.26 | 47.52/18.72/24.91 | 47.65/18.75/24.95|
| gigaword | 38.75/19.96/36.14 | 39.12/19.86/36.24 | 39.65/20.47/36.76|
| wikihow | 43.07/19.70/34.79 | 41.35/18.51/33.42 | 46.39/22.12/38.41 *|
| reddit_tifu | 26.54/8.94/21.64 | 26.63/9.01/21.60 | 27.99/9.81/22.94|
| big_patent | 53.63/33.16/42.25 | 53.41/32.89/42.07 | 52.29/33.08/41.66 *|
| arxiv | 44.70/17.27/25.80 | 44.67/17.18/25.73 | 44.21/16.95/25.67|
| pubmed | 45.49/19.90/27.69 | 45.09/19.56/27.42 | 45.97/20.15/28.25|
| aeslc | 37.69/21.85/36.84 | 37.40/21.22/36.45 | 37.68/21.25/36.51|
| billsum | 57.20/39.56/45.80 | 57.31/40.19/45.82 | 59.67/41.58/47.59|
The "Mixed & Stochastic" model has the following changes relative to pegasus-large in the paper:
- trained on both C4 and HugeNews (the dataset mixture is weighted by their number of examples).
- trained for 1.5M steps instead of 500k (we observe slower convergence on pretraining perplexity).
- the model uniformly samples a gap sentence ratio between 15% and 45%.
- important sentences are sampled by applying 20% uniform noise to the importance scores.
- the SentencePiece tokenizer is updated to be able to encode the newline character.
(*) the numbers for the wikihow and big_patent datasets are not comparable because of changes in tokenization and data:
- the wikihow dataset contains newline characters, which are useful for paragraph segmentation; the C4 and HugeNews models' SentencePiece tokenizer doesn't encode newlines and loses this information.
- we updated the BigPatent dataset to preserve casing and changed some of the formatting cleanup; please refer to the changes in TFDS.
Citation
```
@misc{zhang2019pegasus,
title={PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization},
author={Jingqing Zhang and Yao Zhao and Mohammad Saleh and Peter J. Liu},
year={2019},
eprint={1912.08777},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
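Because the Mixed & Stochastic tokenizer can encode newline characters (see the notes above), it may help to keep the paragraph breaks of a WikiHow-style article in the input rather than flattening it. A minimal sketch under that assumption:
```python
from transformers import PegasusTokenizer, PegasusForConditionalGeneration

model_name = "google/pegasus-wikihow"
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)

# Keep newlines between steps/paragraphs; the updated tokenizer can encode them.
article = "Step 1: Gather your materials.\nStep 2: Follow the instructions.\nStep 3: Clean up when you are done."
batch = tokenizer(article, truncation=True, return_tensors="pt")
summary_ids = model.generate(**batch, num_beams=4, max_new_tokens=64)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```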
|
{"language": "en", "tags": ["summarization"]}
|
google/pegasus-wikihow
| null |
[
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"summarization",
"en",
"arxiv:1912.08777",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1912.08777"
] |
[
"en"
] |
TAGS
#transformers #pytorch #pegasus #text2text-generation #summarization #en #arxiv-1912.08777 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
### Pegasus Models
See Docs: here
Original TF 1 code here
Authors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019
Maintained by: @sshleifer
Task: Summarization
The following is copied from the authors' README.
Mixed & Stochastic Checkpoints
==============================
We train a Pegasus model with sampled gap-sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated results are reported in this table.
The "Mixed & Stochastic" model has the following changes relative to pegasus-large in the paper:
* trained on both C4 and HugeNews (the dataset mixture is weighted by their number of examples).
* trained for 1.5M steps instead of 500k (we observe slower convergence on pretraining perplexity).
* the model uniformly samples a gap sentence ratio between 15% and 45%.
* important sentences are sampled by applying 20% uniform noise to the importance scores.
* the SentencePiece tokenizer is updated to be able to encode the newline character.
(\*) the numbers for the wikihow and big\_patent datasets are not comparable because of changes in tokenization and data:
* the wikihow dataset contains newline characters, which are useful for paragraph segmentation; the C4 and HugeNews models' SentencePiece tokenizer doesn't encode newlines and loses this information.
* we updated the BigPatent dataset to preserve casing and changed some of the formatting cleanup; please refer to the changes in TFDS.
Citation
|
[
"### Pegasus Models\n\n\nSee Docs: here\n\n\nOriginal TF 1 code here\n\n\nAuthors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019\n\n\nMaintained by: @sshleifer\n\n\nTask: Summarization\n\n\nThe following is copied from the authors' README.\n\n\nMixed & Stochastic Checkpoints\n==============================\n\n\nWe train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated the results are reported in this table.\n\n\n\nThe \"Mixed & Stochastic\" model has the following changes:\n\n\n* trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).\n* trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).\n* the model uniformly sample a gap sentence ratio between 15% and 45%.\n* importance sentences are sampled using a 20% uniform noise to importance scores.\n* the sentencepiece tokenizer is updated to be able to encode newline character.\n\n\n(\\*) the numbers of wikihow and big\\_patent datasets are not comparable because of change in tokenization and data:\n\n\n* wikihow dataset contains newline characters which is useful for paragraph segmentation, the C4 and HugeNews model's sentencepiece tokenizer doesn't encode newline and loose this information.\n* we update the BigPatent dataset to preserve casing, some format cleanings are also changed, please refer to change in TFDS.\n\n\nThe \"Mixed & Stochastic\" model has the following changes (from pegasus-large in the paper):\n\n\ntrained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).\ntrained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).\nthe model uniformly sample a gap sentence ratio between 15% and 45%.\nimportance sentences are sampled using a 20% uniform noise to importance scores.\nthe sentencepiece tokenizer is updated to be able to encode newline character.\n\n\nCitation"
] |
[
"TAGS\n#transformers #pytorch #pegasus #text2text-generation #summarization #en #arxiv-1912.08777 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### Pegasus Models\n\n\nSee Docs: here\n\n\nOriginal TF 1 code here\n\n\nAuthors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019\n\n\nMaintained by: @sshleifer\n\n\nTask: Summarization\n\n\nThe following is copied from the authors' README.\n\n\nMixed & Stochastic Checkpoints\n==============================\n\n\nWe train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated the results are reported in this table.\n\n\n\nThe \"Mixed & Stochastic\" model has the following changes:\n\n\n* trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).\n* trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).\n* the model uniformly sample a gap sentence ratio between 15% and 45%.\n* importance sentences are sampled using a 20% uniform noise to importance scores.\n* the sentencepiece tokenizer is updated to be able to encode newline character.\n\n\n(\\*) the numbers of wikihow and big\\_patent datasets are not comparable because of change in tokenization and data:\n\n\n* wikihow dataset contains newline characters which is useful for paragraph segmentation, the C4 and HugeNews model's sentencepiece tokenizer doesn't encode newline and loose this information.\n* we update the BigPatent dataset to preserve casing, some format cleanings are also changed, please refer to change in TFDS.\n\n\nThe \"Mixed & Stochastic\" model has the following changes (from pegasus-large in the paper):\n\n\ntrained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).\ntrained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).\nthe model uniformly sample a gap sentence ratio between 15% and 45%.\nimportance sentences are sampled using a 20% uniform noise to importance scores.\nthe sentencepiece tokenizer is updated to be able to encode newline character.\n\n\nCitation"
] |
summarization
|
transformers
|
### Pegasus Models
See Docs: [here](https://huggingface.co/transformers/master/model_doc/pegasus.html)
Original TF 1 code [here](https://github.com/google-research/pegasus)
Authors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019
Maintained by: [@sshleifer](https://twitter.com/sam_shleifer)
Task: Summarization
The following is copied from the authors' README.
# Mixed & Stochastic Checkpoints
We train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated results are reported in this table.
| dataset | C4 | HugeNews | Mixed & Stochastic|
| ---- | ---- | ---- | ----|
| xsum | 45.20/22.06/36.99 | 47.21/24.56/39.25 | 47.60/24.83/39.64|
| cnn_dailymail | 43.90/21.20/40.76 | 44.17/21.47/41.11 | 44.16/21.56/41.30|
| newsroom | 45.07/33.39/41.28 | 45.15/33.51/41.33 | 45.98/34.20/42.18|
| multi_news | 46.74/17.95/24.26 | 47.52/18.72/24.91 | 47.65/18.75/24.95|
| gigaword | 38.75/19.96/36.14 | 39.12/19.86/36.24 | 39.65/20.47/36.76|
| wikihow | 43.07/19.70/34.79 | 41.35/18.51/33.42 | 46.39/22.12/38.41 *|
| reddit_tifu | 26.54/8.94/21.64 | 26.63/9.01/21.60 | 27.99/9.81/22.94|
| big_patent | 53.63/33.16/42.25 | 53.41/32.89/42.07 | 52.29/33.08/41.66 *|
| arxiv | 44.70/17.27/25.80 | 44.67/17.18/25.73 | 44.21/16.95/25.67|
| pubmed | 45.49/19.90/27.69 | 45.09/19.56/27.42 | 45.97/20.15/28.25|
| aeslc | 37.69/21.85/36.84 | 37.40/21.22/36.45 | 37.68/21.25/36.51|
| billsum | 57.20/39.56/45.80 | 57.31/40.19/45.82 | 59.67/41.58/47.59|
The "Mixed & Stochastic" model has the following changes:
- trained on both C4 and HugeNews (the dataset mixture is weighted by their number of examples).
- trained for 1.5M steps instead of 500k (we observe slower convergence on pretraining perplexity).
- the model uniformly samples a gap-sentence ratio between 15% and 45%.
- important sentences are sampled with 20% uniform noise added to their importance scores.
- the SentencePiece tokenizer is updated so that it can encode the newline character.
(*) the numbers for the wikihow and big_patent datasets are not comparable because of changes in tokenization and data:
- the wikihow dataset contains newline characters, which are useful for paragraph segmentation; the SentencePiece tokenizer of the C4 and HugeNews models does not encode newlines and loses this information.
- we updated the BigPatent dataset to preserve casing; some formatting cleanups were also changed (please refer to the change in TFDS).
The "Mixed & Stochastic" model has the following changes (relative to pegasus-large in the paper):
- trained on both C4 and HugeNews (the dataset mixture is weighted by their number of examples).
- trained for 1.5M steps instead of 500k (we observe slower convergence on pretraining perplexity).
- the model uniformly samples a gap-sentence ratio between 15% and 45%.
- important sentences are sampled with 20% uniform noise added to their importance scores.
- the SentencePiece tokenizer is updated so that it can encode the newline character.
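For reference, a minimal usage sketch with the `transformers` Pegasus classes (this is not part of the authors' README, and the article below is a placeholder):
```python
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

model_name = "google/pegasus-xsum"
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)

# short news-style article (placeholder example)
article = (
    "PG&E stated it scheduled the blackouts in response to forecasts for high winds "
    "amid dry conditions. The aim is to reduce the risk of wildfires."
)

batch = tokenizer(article, truncation=True, padding="longest", return_tensors="pt")
summary_ids = model.generate(**batch)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True))
```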
Citation
```
@misc{zhang2019pegasus,
title={PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization},
author={Jingqing Zhang and Yao Zhao and Mohammad Saleh and Peter J. Liu},
year={2019},
eprint={1912.08777},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "tags": ["summarization"], "model-index": [{"name": "google/pegasus-xsum", "results": [{"task": {"type": "summarization", "name": "Summarization"}, "dataset": {"name": "samsum", "type": "samsum", "config": "samsum", "split": "train"}, "metrics": [{"type": "rouge", "value": 21.8096, "name": "ROUGE-1", "verified": true}, {"type": "rouge", "value": 4.2525, "name": "ROUGE-2", "verified": true}, {"type": "rouge", "value": 17.4469, "name": "ROUGE-L", "verified": true}, {"type": "rouge", "value": 18.8907, "name": "ROUGE-LSUM", "verified": true}, {"type": "loss", "value": 3.0317161083221436, "name": "loss", "verified": true}, {"type": "gen_len", "value": 20.3122, "name": "gen_len", "verified": true}]}, {"task": {"type": "summarization", "name": "Summarization"}, "dataset": {"name": "xsum", "type": "xsum", "config": "default", "split": "test"}, "metrics": [{"type": "rouge", "value": 46.8623, "name": "ROUGE-1", "verified": true}, {"type": "rouge", "value": 24.4533, "name": "ROUGE-2", "verified": true}, {"type": "rouge", "value": 39.0548, "name": "ROUGE-L", "verified": true}, {"type": "rouge", "value": 39.0994, "name": "ROUGE-LSUM", "verified": true}, {"type": "loss", "value": 1.5717021226882935, "name": "loss", "verified": true}, {"type": "gen_len", "value": 22.8821, "name": "gen_len", "verified": true}]}, {"task": {"type": "summarization", "name": "Summarization"}, "dataset": {"name": "cnn_dailymail", "type": "cnn_dailymail", "config": "3.0.0", "split": "test"}, "metrics": [{"type": "rouge", "value": 22.2062, "name": "ROUGE-1", "verified": true}, {"type": "rouge", "value": 7.6701, "name": "ROUGE-2", "verified": true}, {"type": "rouge", "value": 15.4046, "name": "ROUGE-L", "verified": true}, {"type": "rouge", "value": 19.2182, "name": "ROUGE-LSUM", "verified": true}, {"type": "loss", "value": 2.681241273880005, "name": "loss", "verified": true}, {"type": "gen_len", "value": 25.0234, "name": "gen_len", "verified": true}]}]}]}
|
google/pegasus-xsum
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"pegasus",
"text2text-generation",
"summarization",
"en",
"arxiv:1912.08777",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1912.08777"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #pegasus #text2text-generation #summarization #en #arxiv-1912.08777 #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us
|
### Pegasus Models
See Docs: here
Original TF 1 code here
Authors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019
Maintained by: @sshleifer
Task: Summarization
The following is copied from the authors' README.
Mixed & Stochastic Checkpoints
==============================
We train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated the results are reported in this table.
The "Mixed & Stochastic" model has the following changes:
* trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).
* trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).
* the model uniformly sample a gap sentence ratio between 15% and 45%.
* importance sentences are sampled using a 20% uniform noise to importance scores.
* the sentencepiece tokenizer is updated to be able to encode newline character.
(\*) the numbers of wikihow and big\_patent datasets are not comparable because of change in tokenization and data:
* wikihow dataset contains newline characters which is useful for paragraph segmentation, the C4 and HugeNews model's sentencepiece tokenizer doesn't encode newline and loose this information.
* we update the BigPatent dataset to preserve casing, some format cleanings are also changed, please refer to change in TFDS.
The "Mixed & Stochastic" model has the following changes (from pegasus-large in the paper):
trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).
trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).
the model uniformly sample a gap sentence ratio between 15% and 45%.
importance sentences are sampled using a 20% uniform noise to importance scores.
the sentencepiece tokenizer is updated to be able to encode newline character.
Citation
|
[
"### Pegasus Models\n\n\nSee Docs: here\n\n\nOriginal TF 1 code here\n\n\nAuthors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019\n\n\nMaintained by: @sshleifer\n\n\nTask: Summarization\n\n\nThe following is copied from the authors' README.\n\n\nMixed & Stochastic Checkpoints\n==============================\n\n\nWe train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated the results are reported in this table.\n\n\n\nThe \"Mixed & Stochastic\" model has the following changes:\n\n\n* trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).\n* trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).\n* the model uniformly sample a gap sentence ratio between 15% and 45%.\n* importance sentences are sampled using a 20% uniform noise to importance scores.\n* the sentencepiece tokenizer is updated to be able to encode newline character.\n\n\n(\\*) the numbers of wikihow and big\\_patent datasets are not comparable because of change in tokenization and data:\n\n\n* wikihow dataset contains newline characters which is useful for paragraph segmentation, the C4 and HugeNews model's sentencepiece tokenizer doesn't encode newline and loose this information.\n* we update the BigPatent dataset to preserve casing, some format cleanings are also changed, please refer to change in TFDS.\n\n\nThe \"Mixed & Stochastic\" model has the following changes (from pegasus-large in the paper):\n\n\ntrained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).\ntrained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).\nthe model uniformly sample a gap sentence ratio between 15% and 45%.\nimportance sentences are sampled using a 20% uniform noise to importance scores.\nthe sentencepiece tokenizer is updated to be able to encode newline character.\n\n\nCitation"
] |
[
"TAGS\n#transformers #pytorch #tf #jax #pegasus #text2text-generation #summarization #en #arxiv-1912.08777 #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### Pegasus Models\n\n\nSee Docs: here\n\n\nOriginal TF 1 code here\n\n\nAuthors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019\n\n\nMaintained by: @sshleifer\n\n\nTask: Summarization\n\n\nThe following is copied from the authors' README.\n\n\nMixed & Stochastic Checkpoints\n==============================\n\n\nWe train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated the results are reported in this table.\n\n\n\nThe \"Mixed & Stochastic\" model has the following changes:\n\n\n* trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).\n* trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).\n* the model uniformly sample a gap sentence ratio between 15% and 45%.\n* importance sentences are sampled using a 20% uniform noise to importance scores.\n* the sentencepiece tokenizer is updated to be able to encode newline character.\n\n\n(\\*) the numbers of wikihow and big\\_patent datasets are not comparable because of change in tokenization and data:\n\n\n* wikihow dataset contains newline characters which is useful for paragraph segmentation, the C4 and HugeNews model's sentencepiece tokenizer doesn't encode newline and loose this information.\n* we update the BigPatent dataset to preserve casing, some format cleanings are also changed, please refer to change in TFDS.\n\n\nThe \"Mixed & Stochastic\" model has the following changes (from pegasus-large in the paper):\n\n\ntrained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).\ntrained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).\nthe model uniformly sample a gap sentence ratio between 15% and 45%.\nimportance sentences are sampled using a 20% uniform noise to importance scores.\nthe sentencepiece tokenizer is updated to be able to encode newline character.\n\n\nCitation"
] |
null |
transformers
|
# realm-cc-news-pretrained-embedder
## Model description
The REALM checkpoint pretrained with CC-News as target corpus and Wikipedia as knowledge corpus, converted from the TF checkpoint provided by Google Language.
The original paper, code, and checkpoints can be found [here](https://github.com/google-research/language/tree/master/language/realm).
## Usage
```python
from transformers import RealmEmbedder
embedder = RealmEmbedder.from_pretrained("qqaatw/realm-cc-news-pretrained-embedder")
```
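For illustration, a sketch of extracting a query embedding end to end. It assumes the checkpoint ships tokenizer files and that the output exposes a `projected_score` field, as in the current `transformers` REALM implementation; adjust if your version differs. The input sentence is a placeholder.
```python
from transformers import RealmEmbedder, RealmTokenizer

repo = "google/realm-cc-news-pretrained-embedder"
tokenizer = RealmTokenizer.from_pretrained(repo)
embedder = RealmEmbedder.from_pretrained(repo)

inputs = tokenizer("REALM retrieves documents to help answer questions.", return_tensors="pt")
outputs = embedder(**inputs)
embedding = outputs.projected_score  # (batch_size, projection_dim) low-dimensional embedding
print(embedding.shape)
```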
|
{"language": "en", "license": "apache-2.0"}
|
google/realm-cc-news-pretrained-embedder
| null |
[
"transformers",
"pytorch",
"realm",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #realm #en #license-apache-2.0 #endpoints_compatible #region-us
|
# realm-cc-news-pretrained-embedder
## Model description
The REALM checkpoint pretrained with CC-News as target corpus and Wikipedia as knowledge corpus, converted from the TF checkpoint provided by Google Language.
The original paper, code, and checkpoints can be found here.
## Usage
|
[
"# realm-cc-news-pretrained-embedder",
"## Model description\n\nThe REALM checkpoint pretrained with CC-News as target corpus and Wikipedia as knowledge corpus, converted from the TF checkpoint provided by Google Language.\n\nThe original paper, code, and checkpoints can be found here.",
"## Usage"
] |
[
"TAGS\n#transformers #pytorch #realm #en #license-apache-2.0 #endpoints_compatible #region-us \n",
"# realm-cc-news-pretrained-embedder",
"## Model description\n\nThe REALM checkpoint pretrained with CC-News as target corpus and Wikipedia as knowledge corpus, converted from the TF checkpoint provided by Google Language.\n\nThe original paper, code, and checkpoints can be found here.",
"## Usage"
] |
null |
transformers
|
# realm-cc-news-pretrained-encoder
## Model description
The REALM checkpoint pretrained with CC-News as target corpus and Wikipedia as knowledge corpus, converted from the TF checkpoint provided by Google Language.
The original paper, code, and checkpoints can be found [here](https://github.com/google-research/language/tree/master/language/realm).
## Usage
```python
from transformers import RealmKnowledgeAugEncoder
encoder = RealmKnowledgeAugEncoder.from_pretrained("qqaatw/realm-cc-news-pretrained-encoder")
```
|
{"language": "en", "license": "apache-2.0"}
|
google/realm-cc-news-pretrained-encoder
| null |
[
"transformers",
"pytorch",
"realm",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #realm #en #license-apache-2.0 #endpoints_compatible #region-us
|
# realm-cc-news-pretrained-encoder
## Model description
The REALM checkpoint pretrained with CC-News as target corpus and Wikipedia as knowledge corpus, converted from the TF checkpoint provided by Google Language.
The original paper, code, and checkpoints can be found here.
## Usage
|
[
"# realm-cc-news-pretrained-encoder",
"## Model description\n\nThe REALM checkpoint pretrained with CC-News as target corpus and Wikipedia as knowledge corpus, converted from the TF checkpoint provided by Google Language.\n\nThe original paper, code, and checkpoints can be found here.",
"## Usage"
] |
[
"TAGS\n#transformers #pytorch #realm #en #license-apache-2.0 #endpoints_compatible #region-us \n",
"# realm-cc-news-pretrained-encoder",
"## Model description\n\nThe REALM checkpoint pretrained with CC-News as target corpus and Wikipedia as knowledge corpus, converted from the TF checkpoint provided by Google Language.\n\nThe original paper, code, and checkpoints can be found here.",
"## Usage"
] |
null |
transformers
|
# realm-cc-news-pretrained-openqa
## Model description
The REALM checkpoint pretrained with CC-News as target corpus and Wikipedia as knowledge corpus, converted from the TF checkpoint provided by Google Language.
The original paper, code, and checkpoints can be found [here](https://github.com/google-research/language/tree/master/language/realm).
## Usage
```python
from transformers import RealmForOpenQA
openqa = RealmForOpenQA.from_pretrained("qqaatw/realm-cc-news-pretrained-openqa")
```
|
{"language": "en", "license": "apache-2.0"}
|
google/realm-cc-news-pretrained-openqa
| null |
[
"transformers",
"pytorch",
"realm",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #realm #en #license-apache-2.0 #endpoints_compatible #region-us
|
# realm-cc-news-pretrained-openqa
## Model description
The REALM checkpoint pretrained with CC-News as target corpus and Wikipedia as knowledge corpus, converted from the TF checkpoint provided by Google Language.
The original paper, code, and checkpoints can be found here.
## Usage
|
[
"# realm-cc-news-pretrained-openqa",
"## Model description\r\n\r\nThe REALM checkpoint pretrained with CC-News as target corpus and Wikipedia as knowledge corpus, converted from the TF checkpoint provided by Google Language.\r\n\r\nThe original paper, code, and checkpoints can be found here.",
"## Usage"
] |
[
"TAGS\n#transformers #pytorch #realm #en #license-apache-2.0 #endpoints_compatible #region-us \n",
"# realm-cc-news-pretrained-openqa",
"## Model description\r\n\r\nThe REALM checkpoint pretrained with CC-News as target corpus and Wikipedia as knowledge corpus, converted from the TF checkpoint provided by Google Language.\r\n\r\nThe original paper, code, and checkpoints can be found here.",
"## Usage"
] |
null |
transformers
|
# realm-cc-news-pretrained-scorer
## Model description
The REALM checkpoint pretrained with CC-News as target corpus and Wikipedia as knowledge corpus, converted from the TF checkpoint provided by Google Language.
The original paper, code, and checkpoints can be found [here](https://github.com/google-research/language/tree/master/language/realm).
## Usage
```python
from transformers import RealmScorer
scorer = RealmScorer.from_pretrained("qqaatw/realm-cc-news-pretrained-scorer")
```
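For illustration, a sketch of scoring candidate documents against a query. It assumes `RealmTokenizer.batch_encode_candidates` and the `relevance_score` output field from the `transformers` REALM implementation; the query and candidate texts are placeholders.
```python
from transformers import RealmScorer, RealmTokenizer

repo = "google/realm-cc-news-pretrained-scorer"
tokenizer = RealmTokenizer.from_pretrained(repo)
scorer = RealmScorer.from_pretrained(repo, num_candidates=2)

queries = ["What is the capital of France?"]
candidates = [["Paris is the capital and largest city of France.", "Berlin is the capital of Germany."]]

inputs = tokenizer(queries, return_tensors="pt")
candidate_inputs = tokenizer.batch_encode_candidates(candidates, max_length=32, return_tensors="pt")

outputs = scorer(
    **inputs,
    candidate_input_ids=candidate_inputs.input_ids,
    candidate_attention_mask=candidate_inputs.attention_mask,
    candidate_token_type_ids=candidate_inputs.token_type_ids,
)
print(outputs.relevance_score)  # higher score = more relevant candidate
```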
|
{"language": "en", "license": "apache-2.0"}
|
google/realm-cc-news-pretrained-scorer
| null |
[
"transformers",
"pytorch",
"realm",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #realm #en #license-apache-2.0 #endpoints_compatible #region-us
|
# realm-cc-news-pretrained-scorer
## Model description
The REALM checkpoint pretrained with CC-News as target corpus and Wikipedia as knowledge corpus, converted from the TF checkpoint provided by Google Language.
The original paper, code, and checkpoints can be found here.
## Usage
|
[
"# realm-cc-news-pretrained-scorer",
"## Model description\n\nThe REALM checkpoint pretrained with CC-News as target corpus and Wikipedia as knowledge corpus, converted from the TF checkpoint provided by Google Language.\n\nThe original paper, code, and checkpoints can be found here.",
"## Usage"
] |
[
"TAGS\n#transformers #pytorch #realm #en #license-apache-2.0 #endpoints_compatible #region-us \n",
"# realm-cc-news-pretrained-scorer",
"## Model description\n\nThe REALM checkpoint pretrained with CC-News as target corpus and Wikipedia as knowledge corpus, converted from the TF checkpoint provided by Google Language.\n\nThe original paper, code, and checkpoints can be found here.",
"## Usage"
] |
null |
transformers
|
# realm-orqa-nq-openqa
## Model description
The REALM checkpoint fine-tuned on the Natural Questions (NQ) dataset, converted from the TF checkpoint provided by Google Language.
The original paper, code, and checkpoints can be found [here](https://github.com/google-research/language/tree/master/language/realm).
## Usage
```python
from transformers import RealmForOpenQA
openqa = RealmForOpenQA.from_pretrained("qqaatw/realm-orqa-nq-openqa")
```
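Beyond loading the checkpoint, a sketch of end-to-end question answering. It follows the `transformers` REALM documentation (retriever plus tokenizer, with a tuple return when `return_dict=False`); the question and answer strings are placeholders and exact argument names may vary between library versions.
```python
from transformers import RealmForOpenQA, RealmRetriever, RealmTokenizer

repo = "google/realm-orqa-nq-openqa"
retriever = RealmRetriever.from_pretrained(repo)  # loads the block records used for retrieval
tokenizer = RealmTokenizer.from_pretrained(repo)
model = RealmForOpenQA.from_pretrained(repo, retriever=retriever)

question_ids = tokenizer(["Who is the pioneer in modern computer science?"], return_tensors="pt")
# answer_ids is only needed to compute the reader loss
answer_ids = tokenizer(
    ["alan mathison turing"],
    add_special_tokens=False,
    return_token_type_ids=False,
    return_attention_mask=False,
).input_ids

reader_output, predicted_answer_ids = model(**question_ids, answer_ids=answer_ids, return_dict=False)
print(tokenizer.decode(predicted_answer_ids))
```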
|
{"language": "en", "license": "apache-2.0"}
|
google/realm-orqa-nq-openqa
| null |
[
"transformers",
"pytorch",
"realm",
"en",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #realm #en #license-apache-2.0 #endpoints_compatible #has_space #region-us
|
# realm-orqa-nq-openqa
## Model description
The REALM checkpoint finetuned with Natural Questions(NQ) dataset, converted from the TF checkpoint provided by Google Language.
The original paper, code, and checkpoints can be found here.
## Usage
|
[
"# realm-orqa-nq-openqa",
"## Model description\r\n\r\nThe REALM checkpoint finetuned with Natural Questions(NQ) dataset, converted from the TF checkpoint provided by Google Language.\r\n\r\nThe original paper, code, and checkpoints can be found here.",
"## Usage"
] |
[
"TAGS\n#transformers #pytorch #realm #en #license-apache-2.0 #endpoints_compatible #has_space #region-us \n",
"# realm-orqa-nq-openqa",
"## Model description\r\n\r\nThe REALM checkpoint finetuned with Natural Questions(NQ) dataset, converted from the TF checkpoint provided by Google Language.\r\n\r\nThe original paper, code, and checkpoints can be found here.",
"## Usage"
] |
null |
transformers
|
# realm-orqa-nq-reader
## Model description
The REALM checkpoint fine-tuned on the Natural Questions (NQ) dataset, converted from the TF checkpoint provided by Google Language.
The original paper, code, and checkpoints can be found [here](https://github.com/google-research/language/tree/master/language/realm).
## Usage
```python
from transformers import RealmReader
reader = RealmReader.from_pretrained("qqaatw/realm-orqa-nq-reader")
```
|
{"language": "en", "license": "apache-2.0"}
|
google/realm-orqa-nq-reader
| null |
[
"transformers",
"pytorch",
"realm",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #realm #en #license-apache-2.0 #endpoints_compatible #region-us
|
# realm-orqa-nq-reader
## Model description
The REALM checkpoint finetuned with Natural Question(NQ) dataset, converted from the TF checkpoint provided by Google Language.
The original paper, code, and checkpoints can be found here.
## Usage
|
[
"# realm-orqa-nq-reader",
"## Model description\r\n\r\nThe REALM checkpoint finetuned with Natural Question(NQ) dataset, converted from the TF checkpoint provided by Google Language.\r\n\r\nThe original paper, code, and checkpoints can be found here.",
"## Usage"
] |
[
"TAGS\n#transformers #pytorch #realm #en #license-apache-2.0 #endpoints_compatible #region-us \n",
"# realm-orqa-nq-reader",
"## Model description\r\n\r\nThe REALM checkpoint finetuned with Natural Question(NQ) dataset, converted from the TF checkpoint provided by Google Language.\r\n\r\nThe original paper, code, and checkpoints can be found here.",
"## Usage"
] |
null |
transformers
|
# realm-orqa-wq-openqa
## Model description
The REALM checkpoint fine-tuned on the WebQuestions (WQ) dataset, converted from the TF checkpoint provided by Google Language.
The original paper, code, and checkpoints can be found [here](https://github.com/google-research/language/tree/master/language/realm).
## Usage
```python
from transformers import RealmForOpenQA
openqa = RealmForOpenQA.from_pretrained("qqaatw/realm-orqa-wq-openqa")
```
|
{"language": "en", "license": "apache-2.0"}
|
google/realm-orqa-wq-openqa
| null |
[
"transformers",
"pytorch",
"realm",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #realm #en #license-apache-2.0 #endpoints_compatible #region-us
|
# realm-orqa-wq-openqa
## Model description
The REALM checkpoint finetuned with Web Questions(WQ) dataset, converted from the TF checkpoint provided by Google Language.
The original paper, code, and checkpoints can be found here.
## Usage
|
[
"# realm-orqa-wq-openqa",
"## Model description\r\n\r\nThe REALM checkpoint finetuned with Web Questions(WQ) dataset, converted from the TF checkpoint provided by Google Language.\r\n\r\nThe original paper, code, and checkpoints can be found here.",
"## Usage"
] |
[
"TAGS\n#transformers #pytorch #realm #en #license-apache-2.0 #endpoints_compatible #region-us \n",
"# realm-orqa-wq-openqa",
"## Model description\r\n\r\nThe REALM checkpoint finetuned with Web Questions(WQ) dataset, converted from the TF checkpoint provided by Google Language.\r\n\r\nThe original paper, code, and checkpoints can be found here.",
"## Usage"
] |
null |
transformers
|
# realm-orqa-wq-reader
## Model description
The REALM checkpoint fine-tuned on the WebQuestions (WQ) dataset, converted from the TF checkpoint provided by Google Language.
The original paper, code, and checkpoints can be found [here](https://github.com/google-research/language/tree/master/language/realm).
## Usage
```python
from transformers import RealmReader
reader = RealmReader.from_pretrained("qqaatw/realm-orqa-wq-reader")
```
|
{"language": "en", "license": "apache-2.0"}
|
google/realm-orqa-wq-reader
| null |
[
"transformers",
"pytorch",
"realm",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #realm #en #license-apache-2.0 #endpoints_compatible #region-us
|
# realm-orqa-wq-reader
## Model description
The REALM checkpoint finetuned with WebQuestions(WQ) dataset, converted from the TF checkpoint provided by Google Language.
The original paper, code, and checkpoints can be found here.
## Usage
|
[
"# realm-orqa-wq-reader",
"## Model description\r\n\r\nThe REALM checkpoint finetuned with WebQuestions(WQ) dataset, converted from the TF checkpoint provided by Google Language.\r\n\r\nThe original paper, code, and checkpoints can be found here.",
"## Usage"
] |
[
"TAGS\n#transformers #pytorch #realm #en #license-apache-2.0 #endpoints_compatible #region-us \n",
"# realm-orqa-wq-reader",
"## Model description\r\n\r\nThe REALM checkpoint finetuned with WebQuestions(WQ) dataset, converted from the TF checkpoint provided by Google Language.\r\n\r\nThe original paper, code, and checkpoints can be found here.",
"## Usage"
] |
text-generation
|
transformers
|
## Reformer Model trained on "Crime and Punishment"
Crime and Punishment is a novel written by Fyodor Dostoevsky and was translated into English.
Crime and Punishment training data was taken from `gs://trax-ml/reformer/crime-and-punishment-2554.txt` and contains
roughly 0.5M tokens.
The ReformerLM model was trained in Flax using the Colab notebook provided by the authors (https://colab.research.google.com/github/google/trax/blob/master/trax/models/reformer/text_generation.ipynb), and the weights were converted to Hugging Face's PyTorch ReformerLM model `ReformerModelWithLMHead`.
The model is a language model that operates on small sub-word units. Text can be generated as follows:
```python
from transformers import ReformerModelWithLMHead, ReformerTokenizer

model = ReformerModelWithLMHead.from_pretrained("google/reformer-crime-and-punishment")
tok = ReformerTokenizer.from_pretrained("google/reformer-crime-and-punishment")
tok.decode(model.generate(tok.encode("A few months later", return_tensors="pt"), do_sample=True, temperature=0.7, max_length=100)[0])
# gives:'A few months later on was more than anything in the flat.
# “I have already.” “That’s not my notion that he had forgotten him.
# What does that matter? And why do you mean? It’s only another fellow,” he said as he went out, as though he want'
```
|
{}
|
google/reformer-crime-and-punishment
| null |
[
"transformers",
"pytorch",
"rust",
"reformer",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #rust #reformer #text-generation #autotrain_compatible #endpoints_compatible #has_space #region-us
|
## Reformer Model trained on "Crime and Punishment"
Crime and Punishment is a novel written by Fyodor Dostoevsky and was translated into English.
Crime and Punishment training data was taken from 'gs://trax-ml/reformer/URL' and contains
roughly 0.5M tokens.
The ReformerLM model was trained in flax using colab notebook proposed by authors: URL and the weights were converted to Hugging Face's PyTorch ReformerLM model 'ReformerModelWithLMHead'.
The model is a language model that operates on small sub-word units. Text can be generated as follows:
|
[
"## Reformer Model trained on \"Crime and Punishment\" \n\nCrime and Punishment is a novel written by Fyodor Dostoevsky and was translated into English. \n\nCrime and Punishment training data was taken from 'gs://trax-ml/reformer/URL' and contains \nroughly 0.5M tokens. \n\nThe ReformerLM model was trained in flax using colab notebook proposed by authors: URL and the weights were converted to Hugging Face's PyTorch ReformerLM model 'ReformerModelWithLMHead'.\n\nThe model is a language model that operates on small sub-word units. Text can be generated as follows:"
] |
[
"TAGS\n#transformers #pytorch #rust #reformer #text-generation #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"## Reformer Model trained on \"Crime and Punishment\" \n\nCrime and Punishment is a novel written by Fyodor Dostoevsky and was translated into English. \n\nCrime and Punishment training data was taken from 'gs://trax-ml/reformer/URL' and contains \nroughly 0.5M tokens. \n\nThe ReformerLM model was trained in flax using colab notebook proposed by authors: URL and the weights were converted to Hugging Face's PyTorch ReformerLM model 'ReformerModelWithLMHead'.\n\nThe model is a language model that operates on small sub-word units. Text can be generated as follows:"
] |
text-generation
|
transformers
|
## Reformer Language model on character level and trained on enwik8.
*enwik8* is a dataset based on Wikipedia and is often used to measure the model's ability to *compress* data, *e.g.* in
the scope of the *Hutter prize*: https://en.wikipedia.org/wiki/Hutter_Prize.
`reformer-enwik8` was pretrained on the first 90M characters of *enwik8*, with the text chunked into batches of 65536 characters (=2^16).
The model's weights were taken from https://console.cloud.google.com/storage/browser/trax-ml/reformer/enwik8 and converted
to Hugging Face's PyTorch ReformerLM model `ReformerModelWithLMHead`.
The model is a language model that operates on characters.
Therefore, this model does not need a tokenizer. The following functions can instead be used for **encoding** and **decoding**:
```python
import torch
# Encoding
def encode(list_of_strings, pad_token_id=0):
    max_length = max([len(string) for string in list_of_strings])

    # create empty tensors
    attention_masks = torch.zeros((len(list_of_strings), max_length), dtype=torch.long)
    input_ids = torch.full((len(list_of_strings), max_length), pad_token_id, dtype=torch.long)

    for idx, string in enumerate(list_of_strings):
        # make sure string is in byte format
        if not isinstance(string, bytes):
            string = str.encode(string)

        input_ids[idx, :len(string)] = torch.tensor([x + 2 for x in string])
        attention_masks[idx, :len(string)] = 1

    return input_ids, attention_masks

# Decoding
def decode(outputs_ids):
    decoded_outputs = []
    for output_ids in outputs_ids.tolist():
        # transform ids back to chars; IDs < 2 are simply mapped to ""
        decoded_outputs.append("".join([chr(x - 2) if x > 1 else "" for x in output_ids]))
    return decoded_outputs
```
Text can be generated as follows:
```python
from transformers import ReformerModelWithLMHead
model = ReformerModelWithLMHead.from_pretrained("google/reformer-enwik8")
encoded, attention_masks = encode(["In 1965, Brooks left IBM to found the Department of"])
decode(model.generate(encoded, do_sample=True, max_length=150))
# gives:
# In 1965, Brooks left IBM to found the Department of Journalism in 1968. IBM had jurisdiction himself in 1980, while Brooks resolved, nevertheless thro
```
***Note***: Language generation using `ReformerModelWithLMHead` is not optimized yet and is rather slow.
|
{}
|
google/reformer-enwik8
| null |
[
"transformers",
"pytorch",
"reformer",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #reformer #text-generation #autotrain_compatible #endpoints_compatible #has_space #region-us
|
## Reformer Language model on character level and trained on enwik8.
*enwik8* is a dataset based on Wikipedia and is often used to measure the model's ability to *compress* data, *e.g.* in
the scope of the *Hutter prize*: URL
'reformer-enwik8' was pretrained on the first 90M chars of *enwik8* whereas the text was chunked into batches of size 65536 chars (=2^16).
The model's weights were taken from URL and converted
to Hugging Face's PyTorch ReformerLM model 'ReformerModelWithLMHead'.
The model is a language model that operates on characters.
Therefore, this model does not need a tokenizer. The following function can instead be used for encoding and decoding:
Text can be generated as follows:
*Note*: Language generation using 'ReformerModelWithLMHead' is not optimized yet and is rather slow.
|
[
"## Reformer Language model on character level and trained on enwik8. \n\n*enwik8* is a dataset based on Wikipedia and is often used to measure the model's ability to *compress* data, *e.g.* in \nthe scope of the *Hutter prize*: URL\n\n'reformer-enwik8' was pretrained on the first 90M chars of *enwik8* whereas the text was chunked into batches of size 65536 chars (=2^16).\nThe model's weights were taken from URL and converted \nto Hugging Face's PyTorch ReformerLM model 'ReformerModelWithLMHead'.\n\nThe model is a language model that operates on characters. \nTherefore, this model does not need a tokenizer. The following function can instead be used for encoding and decoding:\n\n\n\nText can be generated as follows:\n\n\n\n*Note*: Language generation using 'ReformerModelWithLMHead' is not optimized yet and is rather slow."
] |
[
"TAGS\n#transformers #pytorch #reformer #text-generation #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"## Reformer Language model on character level and trained on enwik8. \n\n*enwik8* is a dataset based on Wikipedia and is often used to measure the model's ability to *compress* data, *e.g.* in \nthe scope of the *Hutter prize*: URL\n\n'reformer-enwik8' was pretrained on the first 90M chars of *enwik8* whereas the text was chunked into batches of size 65536 chars (=2^16).\nThe model's weights were taken from URL and converted \nto Hugging Face's PyTorch ReformerLM model 'ReformerModelWithLMHead'.\n\nThe model is a language model that operates on characters. \nTherefore, this model does not need a tokenizer. The following function can instead be used for encoding and decoding:\n\n\n\nText can be generated as follows:\n\n\n\n*Note*: Language generation using 'ReformerModelWithLMHead' is not optimized yet and is rather slow."
] |
null |
transformers
|
# RemBERT (for classification)
Pretrained RemBERT model on 110 languages using a masked language modeling (MLM) objective. It was introduced in the paper [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/abs/2010.12821). A direct export of the model checkpoint was first made available in [this repository](https://github.com/google-research/google-research/tree/master/rembert). This version of the checkpoint is lightweight since it is meant to be finetuned for classification and excludes the output embedding weights.
## Model description
RemBERT's main difference from mBERT is that the input and output embeddings are not tied. Instead, RemBERT uses small input embeddings and larger output embeddings. This makes the model more efficient since the output embeddings are discarded during fine-tuning. It is also more accurate, especially when reinvesting the input embeddings' parameters into the core model, as is done in RemBERT.
## Intended uses & limitations
You should fine-tune this model for your downstream task. It is meant to be a general-purpose model, similar to mBERT. In our [paper](https://arxiv.org/abs/2010.12821), we have successfully applied this model to tasks such as classification, question answering, NER, and POS tagging. For tasks such as text generation, you should look at models like GPT-2.
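As a concrete starting point, a minimal sketch of loading the checkpoint for sequence classification with `transformers` (the `RemBertTokenizer` and `RemBertForSequenceClassification` classes are assumed; the input sentence, label, and number of labels are placeholders):
```python
import torch
from transformers import RemBertForSequenceClassification, RemBertTokenizer

tokenizer = RemBertTokenizer.from_pretrained("google/rembert")
model = RemBertForSequenceClassification.from_pretrained("google/rembert", num_labels=2)

inputs = tokenizer("RemBERT covers 110 languages.", return_tensors="pt")
outputs = model(**inputs, labels=torch.tensor([1]))
print(outputs.loss, outputs.logits)  # fine-tune on your own dataset with Trainer or a custom loop
```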
## Training data
The RemBERT model was pretrained on multilingual Wikipedia data over 110 languages. The full language list is in [this repository](https://github.com/google-research/google-research/tree/master/rembert).
### BibTeX entry and citation info
```bibtex
@inproceedings{DBLP:conf/iclr/ChungFTJR21,
author = {Hyung Won Chung and
Thibault F{\'{e}}vry and
Henry Tsai and
Melvin Johnson and
Sebastian Ruder},
title = {Rethinking Embedding Coupling in Pre-trained Language Models},
booktitle = {9th International Conference on Learning Representations, {ICLR} 2021,
Virtual Event, Austria, May 3-7, 2021},
publisher = {OpenReview.net},
year = {2021},
url = {https://openreview.net/forum?id=xpFFI\_NtgpW},
timestamp = {Wed, 23 Jun 2021 17:36:39 +0200},
biburl = {https://dblp.org/rec/conf/iclr/ChungFTJR21.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
{"language": ["multilingual", "af", "am", "ar", "az", "be", "bg", "bn", "bs", "ca", "ceb", "co", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fil", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "haw", "hi", "hmn", "hr", "ht", "hu", "hy", "id", "ig", "is", "it", "iw", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lb", "lo", "lt", "lv", "mg", "mi", "mk", "ml", "mn", "mr", "ms", "mt", "my", "ne", "nl", false, "ny", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "sm", "sn", "so", "sq", "sr", "st", "su", "sv", "sw", "ta", "te", "tg", "th", "tr", "uk", "ur", "uz", "vi", "xh", "yi", "yo", "zh", "zu"], "license": "apache-2.0", "datasets": ["wikipedia"]}
|
google/rembert
| null |
[
"transformers",
"pytorch",
"tf",
"rembert",
"multilingual",
"af",
"am",
"ar",
"az",
"be",
"bg",
"bn",
"bs",
"ca",
"ceb",
"co",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fil",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"haw",
"hi",
"hmn",
"hr",
"ht",
"hu",
"hy",
"id",
"ig",
"is",
"it",
"iw",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lb",
"lo",
"lt",
"lv",
"mg",
"mi",
"mk",
"ml",
"mn",
"mr",
"ms",
"mt",
"my",
"ne",
"nl",
"no",
"ny",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sd",
"si",
"sk",
"sl",
"sm",
"sn",
"so",
"sq",
"sr",
"st",
"su",
"sv",
"sw",
"ta",
"te",
"tg",
"th",
"tr",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"yo",
"zh",
"zu",
"dataset:wikipedia",
"arxiv:2010.12821",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2010.12821"
] |
[
"multilingual",
"af",
"am",
"ar",
"az",
"be",
"bg",
"bn",
"bs",
"ca",
"ceb",
"co",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fil",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"haw",
"hi",
"hmn",
"hr",
"ht",
"hu",
"hy",
"id",
"ig",
"is",
"it",
"iw",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lb",
"lo",
"lt",
"lv",
"mg",
"mi",
"mk",
"ml",
"mn",
"mr",
"ms",
"mt",
"my",
"ne",
"nl",
"no",
"ny",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sd",
"si",
"sk",
"sl",
"sm",
"sn",
"so",
"sq",
"sr",
"st",
"su",
"sv",
"sw",
"ta",
"te",
"tg",
"th",
"tr",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"yo",
"zh",
"zu"
] |
TAGS
#transformers #pytorch #tf #rembert #multilingual #af #am #ar #az #be #bg #bn #bs #ca #ceb #co #cs #cy #da #de #el #en #eo #es #et #eu #fa #fi #fil #fr #fy #ga #gd #gl #gu #ha #haw #hi #hmn #hr #ht #hu #hy #id #ig #is #it #iw #ja #jv #ka #kk #km #kn #ko #ku #ky #la #lb #lo #lt #lv #mg #mi #mk #ml #mn #mr #ms #mt #my #ne #nl #no #ny #pa #pl #ps #pt #ro #ru #sd #si #sk #sl #sm #sn #so #sq #sr #st #su #sv #sw #ta #te #tg #th #tr #uk #ur #uz #vi #xh #yi #yo #zh #zu #dataset-wikipedia #arxiv-2010.12821 #license-apache-2.0 #endpoints_compatible #has_space #region-us
|
# RemBERT (for classification)
Pretrained RemBERT model on 110 languages using a masked language modeling (MLM) objective. It was introduced in the paper Rethinking embedding coupling in pre-trained language models. A direct export of the model checkpoint was first made available in this repository. This version of the checkpoint is lightweight since it is meant to be finetuned for classification and excludes the output embedding weights.
## Model description
RemBERT's main difference with mBERT is that the input and output embeddings are not tied. Instead, RemBERT uses small input embeddings and larger output embeddings. This makes the model more efficient since the output embeddings are discarded during fine-tuning. It is also more accurate, especially when reinvesting the input embeddings' parameters into the core model, as is done on RemBERT.
## Intended uses & limitations
You should fine-tune this model for your downstream task. It is meant to be a general-purpose model, similar to mBERT. In our paper, we have successfully applied this model to tasks such as classification, question answering, NER, POS-tagging. For tasks such as text generation you should look at models like GPT2.
## Training data
The RemBERT model was pretrained on multilingual Wikipedia data over 110 languages. The full language list is on this repository
### BibTeX entry and citation info
|
[
"# RemBERT (for classification) \n\nPretrained RemBERT model on 110 languages using a masked language modeling (MLM) objective. It was introduced in the paper Rethinking embedding coupling in pre-trained language models. A direct export of the model checkpoint was first made available in this repository. This version of the checkpoint is lightweight since it is meant to be finetuned for classification and excludes the output embedding weights.",
"## Model description\n\nRemBERT's main difference with mBERT is that the input and output embeddings are not tied. Instead, RemBERT uses small input embeddings and larger output embeddings. This makes the model more efficient since the output embeddings are discarded during fine-tuning. It is also more accurate, especially when reinvesting the input embeddings' parameters into the core model, as is done on RemBERT.",
"## Intended uses & limitations\n\nYou should fine-tune this model for your downstream task. It is meant to be a general-purpose model, similar to mBERT. In our paper, we have successfully applied this model to tasks such as classification, question answering, NER, POS-tagging. For tasks such as text generation you should look at models like GPT2.",
"## Training data\n\nThe RemBERT model was pretrained on multilingual Wikipedia data over 110 languages. The full language list is on this repository",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #tf #rembert #multilingual #af #am #ar #az #be #bg #bn #bs #ca #ceb #co #cs #cy #da #de #el #en #eo #es #et #eu #fa #fi #fil #fr #fy #ga #gd #gl #gu #ha #haw #hi #hmn #hr #ht #hu #hy #id #ig #is #it #iw #ja #jv #ka #kk #km #kn #ko #ku #ky #la #lb #lo #lt #lv #mg #mi #mk #ml #mn #mr #ms #mt #my #ne #nl #no #ny #pa #pl #ps #pt #ro #ru #sd #si #sk #sl #sm #sn #so #sq #sr #st #su #sv #sw #ta #te #tg #th #tr #uk #ur #uz #vi #xh #yi #yo #zh #zu #dataset-wikipedia #arxiv-2010.12821 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n",
"# RemBERT (for classification) \n\nPretrained RemBERT model on 110 languages using a masked language modeling (MLM) objective. It was introduced in the paper Rethinking embedding coupling in pre-trained language models. A direct export of the model checkpoint was first made available in this repository. This version of the checkpoint is lightweight since it is meant to be finetuned for classification and excludes the output embedding weights.",
"## Model description\n\nRemBERT's main difference with mBERT is that the input and output embeddings are not tied. Instead, RemBERT uses small input embeddings and larger output embeddings. This makes the model more efficient since the output embeddings are discarded during fine-tuning. It is also more accurate, especially when reinvesting the input embeddings' parameters into the core model, as is done on RemBERT.",
"## Intended uses & limitations\n\nYou should fine-tune this model for your downstream task. It is meant to be a general-purpose model, similar to mBERT. In our paper, we have successfully applied this model to tasks such as classification, question answering, NER, POS-tagging. For tasks such as text generation you should look at models like GPT2.",
"## Training data\n\nThe RemBERT model was pretrained on multilingual Wikipedia data over 110 languages. The full language list is on this repository",
"### BibTeX entry and citation info"
] |
summarization
|
transformers
|
# Roberta2Roberta_L-24_bbc EncoderDecoder model
The model was introduced in
[this paper](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn and first released in [this repository](https://tfhub.dev/google/bertseq2seq/roberta24_bbc/1).
The model is an encoder-decoder model that was initialized on the `roberta-large` checkpoints for both the encoder
and decoder and fine-tuned on extreme summarization on the BBC XSum dataset, which is linked above.
Disclaimer: The model card has been written by the Hugging Face team.
## How to use
You can use this model for extreme summarization, *e.g.*
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("google/roberta2roberta_L-24_bbc")
model = AutoModelForSeq2SeqLM.from_pretrained("google/roberta2roberta_L-24_bbc")
article = """The problem is affecting people using the older
versions of the PlayStation 3, called the "Fat"
model.The problem isn't affecting the newer PS3
Slim systems that have been on sale since
September last year.Sony have also said they are
aiming to have the problem fixed shortly but is
advising some users to avoid using their console
for the time being."We hope to resolve this
problem within the next 24 hours," a statement
reads. "In the meantime, if you have a model other
than the new slim PS3, we advise that you do not
use your PS3 system, as doing so may result in
errors in some functionality, such as recording
obtained trophies, and not being able to restore
certain data."We believe we have identified that
this problem is being caused by a bug in the clock
functionality incorporated in the system."The
PlayStation Network is used by millions of people
around the world.It allows users to play their
friends at games like Fifa over the internet and
also do things like download software or visit
online stores."""
input_ids = tokenizer(article, return_tensors="pt").input_ids
output_ids = model.generate(input_ids)[0]
print(tokenizer.decode(output_ids, skip_special_tokens=True))
# should output
# Some Sony PlayStation gamers are being advised to stay away from the network because of a problem with the PlayStation 3 network.
```
|
{"language": "en", "license": "apache-2.0", "tags": ["summarization"], "datasets": ["xsum"]}
|
google/roberta2roberta_L-24_bbc
| null |
[
"transformers",
"pytorch",
"encoder-decoder",
"text2text-generation",
"summarization",
"en",
"dataset:xsum",
"arxiv:1907.12461",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1907.12461"
] |
[
"en"
] |
TAGS
#transformers #pytorch #encoder-decoder #text2text-generation #summarization #en #dataset-xsum #arxiv-1907.12461 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Roberta2Roberta_L-24_bbc EncoderDecoder model
The model was introduced in
this paper by Sascha Rothe, Shashi Narayan, Aliaksei Severyn and first released in this repository.
The model is an encoder-decoder model that was initialized on the 'roberta-large' checkpoints for both the encoder
and decoder and fine-tuned on extreme summarization on the BBC XSum dataset, which is linked above.
Disclaimer: The model card has been written by the Hugging Face team.
## How to use
You can use this model for extreme summarization, *e.g.*
|
[
"# Roberta2Roberta_L-24_bbc EncoderDecoder model\n\nThe model was introduced in \nthis paper by Sascha Rothe, Shashi Narayan, Aliaksei Severyn and first released in this repository. \n\nThe model is an encoder-decoder model that was initialized on the 'roberta-large' checkpoints for both the encoder \nand decoder and fine-tuned on extreme summarization on the BBC XSum dataset, which is linked above.\n\nDisclaimer: The model card has been written by the Hugging Face team.",
"## How to use\n\nYou can use this model for extreme summarization, *e.g.*"
] |
[
"TAGS\n#transformers #pytorch #encoder-decoder #text2text-generation #summarization #en #dataset-xsum #arxiv-1907.12461 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Roberta2Roberta_L-24_bbc EncoderDecoder model\n\nThe model was introduced in \nthis paper by Sascha Rothe, Shashi Narayan, Aliaksei Severyn and first released in this repository. \n\nThe model is an encoder-decoder model that was initialized on the 'roberta-large' checkpoints for both the encoder \nand decoder and fine-tuned on extreme summarization on the BBC XSum dataset, which is linked above.\n\nDisclaimer: The model card has been written by the Hugging Face team.",
"## How to use\n\nYou can use this model for extreme summarization, *e.g.*"
] |
summarization
|
transformers
|
# Roberta2Roberta_L-24_cnn_daily_mail EncoderDecoder model
The model was introduced in
[this paper](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn and first released in [this repository](https://tfhub.dev/google/bertseq2seq/roberta24_cnndm/1).
The model is an encoder-decoder model that was initialized on the `roberta-large` checkpoints for both the encoder
and decoder and fine-tuned on summarization on the CNN / Dailymail dataset, which is linked above.
Disclaimer: The model card has been written by the Hugging Face team.
## How to use
You can use this model for summarization, *e.g.*
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("google/roberta2roberta_L-24_cnn_daily_mail")
model = AutoModelForSeq2SeqLM.from_pretrained("google/roberta2roberta_L-24_cnn_daily_mail")
article = """ (The Hollywood Reporter)"The Rocky Horror Picture
Show" is the latest musical getting the small-
screen treatment. Fox is developing a two-hour
remake of the 1975 cult classic to be directed,
executive-produced and choreographed by Kenneth
Ortega ("High School Musical"). The project,
tentatively titled "The Rocky Horror Picture Show
Event," is casting-contingent. The special will be
filmed in advance and not air live, but few
details beyond that are known. In addition to
Ortega, Gail Berman and Lou Adler, who produced
the original film, are also attached as executive
producers. The special will be produced by Fox 21
Television Studios, and Berman's The Jackal Group.
The special is timed to celebrate the 40th
anniversary of the film, which has grossed more
than $112 million and still plays in theaters
across the country. TV premiere dates: The
complete guide . This isn't the first stab at
adapting "The Rocky Horror Picture Show." In 2002,
Fox unveiled plans for an adaptation timed to the
30th anniversary that never came to fruition. The
faces of pilot season 2015 . Fox's "Glee" covered
several of the show's most popular songs for a
Season 2 episode and even released a special "The
Rocky Horror Glee Show" EP. There is no plan yet
for when the adaptation will air. Fox also has a
live musical production of "Grease", starring
Julianne Hough and Vanessa Hudgens, scheduled to
air on Jan. 31, 2016. Broadcast TV scorecard .
Following in the footsteps of "The Sound of Music"
and "Peter Pan," NBC recently announced plans to
air a live version of The Wiz later this year.
Ortega's credits include "Gilmore Girls," "This Is
It" and "Hocus Pocus." He is repped by Paradigm
and Hanson, Jacobson. ©2015 The Hollywood
Reporter. All rights reserved."""
input_ids = tokenizer(article, return_tensors="pt").input_ids
output_ids = model.generate(input_ids)[0]
print(tokenizer.decode(output_ids, skip_special_tokens=True))
# should output
# Fox is developing a two-hour remake of the 1975 cult classic. The special will be directed, executive-produced and choreographed by Kenneth Ortega.
# The special is timed to celebrate the 40th anniversary of the film, which has grossed more than $112 million.
```
|
{"language": "en", "license": "apache-2.0", "tags": ["summarization"], "datasets": ["cnn_dailymail"]}
|
google/roberta2roberta_L-24_cnn_daily_mail
| null |
[
"transformers",
"pytorch",
"encoder-decoder",
"text2text-generation",
"summarization",
"en",
"dataset:cnn_dailymail",
"arxiv:1907.12461",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1907.12461"
] |
[
"en"
] |
TAGS
#transformers #pytorch #encoder-decoder #text2text-generation #summarization #en #dataset-cnn_dailymail #arxiv-1907.12461 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# Roberta2Roberta_L-24_cnn_daily_mail EncoderDecoder model
The model was introduced in
this paper by Sascha Rothe, Shashi Narayan, Aliaksei Severyn and first released in this repository.
The model is an encoder-decoder model that was initialized on the 'roberta-large' checkpoints for both the encoder
and decoder and fine-tuned on summarization on the CNN / Dailymail dataset, which is linked above.
Disclaimer: The model card has been written by the Hugging Face team.
## How to use
You can use this model for summarization, *e.g.*
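A minimal sketch of that usage, mirroring the full example shown in the markdown card above (the article string here is a shortened placeholder):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the CNN / DailyMail summarization checkpoint.
tokenizer = AutoTokenizer.from_pretrained("google/roberta2roberta_L-24_cnn_daily_mail")
model = AutoModelForSeq2SeqLM.from_pretrained("google/roberta2roberta_L-24_cnn_daily_mail")

# Shortened placeholder news article.
article = """Fox is developing a two-hour remake of the 1975 cult classic to be directed,
executive-produced and choreographed by Kenneth Ortega. The special is timed to celebrate
the 40th anniversary of the film, which has grossed more than $112 million."""

input_ids = tokenizer(article, return_tensors="pt").input_ids
output_ids = model.generate(input_ids)[0]
print(tokenizer.decode(output_ids, skip_special_tokens=True))
```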
|
[
"# Roberta2Roberta_L-24_cnn_daily_mail EncoderDecoder model\n\nThe model was introduced in \nthis paper by Sascha Rothe, Shashi Narayan, Aliaksei Severyn and first released in this repository. \n\nThe model is an encoder-decoder model that was initialized on the 'roberta-large' checkpoints for both the encoder \nand decoder and fine-tuned on summarization on the CNN / Dailymail dataset, which is linked above.\n\nDisclaimer: The model card has been written by the Hugging Face team.",
"## How to use\n\nYou can use this model for summarization, *e.g.*"
] |
[
"TAGS\n#transformers #pytorch #encoder-decoder #text2text-generation #summarization #en #dataset-cnn_dailymail #arxiv-1907.12461 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# Roberta2Roberta_L-24_cnn_daily_mail EncoderDecoder model\n\nThe model was introduced in \nthis paper by Sascha Rothe, Shashi Narayan, Aliaksei Severyn and first released in this repository. \n\nThe model is an encoder-decoder model that was initialized on the 'roberta-large' checkpoints for both the encoder \nand decoder and fine-tuned on summarization on the CNN / Dailymail dataset, which is linked above.\n\nDisclaimer: The model card has been written by the Hugging Face team.",
"## How to use\n\nYou can use this model for summarization, *e.g.*"
] |
text2text-generation
|
transformers
|
# Roberta2Roberta_L-24_discofuse EncoderDecoder model
The model was introduced in
[this paper](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn and first released in [this repository](https://tfhub.dev/google/bertseq2seq/roberta24_discofuse/1).
The model is an encoder-decoder model that was initialized on the `roberta-large` checkpoints for both the encoder
and decoder and fine-tuned on sentence fusion on the discofuse dataset, which is linked above.
Disclaimer: The model card has been written by the Hugging Face team.
## How to use
You can use this model for sentence fusion, *e.g.*
IMPORTANT: The model was not trained on the `"` (double quotation mark) character, so before tokenizing the text, it is advised to replace all `"` (double quotation marks) with a single `` ` `` (single back tick).
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("google/roberta2roberta_L-24_discofuse")
model = AutoModelForSeq2SeqLM.from_pretrained("google/roberta2roberta_L-24_discofuse")
discofuse = """As a run-blocker, Zeitler moves relatively well. Zeitler often struggles at the point of contact in space."""
input_ids = tokenizer(discofuse, return_tensors="pt").input_ids
output_ids = model.generate(input_ids)[0]
print(tokenizer.decode(output_ids, skip_special_tokens=True))
# should output
# As a run-blocker, Zeitler moves relatively well. However, Zeitler often struggles at the point of contact in space.
```
|
{"language": "en", "license": "apache-2.0", "datasets": ["discofuse"]}
|
google/roberta2roberta_L-24_discofuse
| null |
[
"transformers",
"pytorch",
"encoder-decoder",
"text2text-generation",
"en",
"dataset:discofuse",
"arxiv:1907.12461",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1907.12461"
] |
[
"en"
] |
TAGS
#transformers #pytorch #encoder-decoder #text2text-generation #en #dataset-discofuse #arxiv-1907.12461 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# Roberta2Roberta_L-24_discofuse EncoderDecoder model
The model was introduced in
this paper by Sascha Rothe, Shashi Narayan, Aliaksei Severyn and first released in this repository.
The model is an encoder-decoder model that was initialized on the 'roberta-large' checkpoints for both the encoder
and decoder and fine-tuned on sentence fusion on the discofuse dataset, which is linked above.
Disclaimer: The model card has been written by the Hugging Face team.
## How to use
You can use this model for sentence fusion, *e.g.*
IMPORTANT: The model was not trained on the '"' (double quotation mark) character, so before tokenizing the text, it is advised to replace all '"' (double quotation marks) with a single '' ' '' (single back tick).
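A minimal sketch of that usage, mirroring the full example shown in the markdown card above:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/roberta2roberta_L-24_discofuse")
model = AutoModelForSeq2SeqLM.from_pretrained("google/roberta2roberta_L-24_discofuse")

# Two sentences to be fused into one; note the caveat above about double quotation marks.
discofuse = """As a run-blocker, Zeitler moves relatively well. Zeitler often struggles at the point of contact in space."""

input_ids = tokenizer(discofuse, return_tensors="pt").input_ids
output_ids = model.generate(input_ids)[0]
print(tokenizer.decode(output_ids, skip_special_tokens=True))
# expected: As a run-blocker, Zeitler moves relatively well. However, Zeitler often struggles at the point of contact in space.
```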
|
[
"# Roberta2Roberta_L-24_discofuse EncoderDecoder model\n\nThe model was introduced in \nthis paper by Sascha Rothe, Shashi Narayan, Aliaksei Severyn and first released in this repository. \n\nThe model is an encoder-decoder model that was initialized on the 'roberta-large' checkpoints for both the encoder \nand decoder and fine-tuned on sentencefusion on the discofuse dataset, which is linked above.\n\nDisclaimer: The model card has been written by the Hugging Face team.",
"## How to use\n\nYou can use this model for sentence fusion, *e.g.*\n\nIMPORTANT: The model was not trained on the '\"' (double quotation mark) character -> so the before tokenizing the text, it is advised to replace all '\"' (double quotation marks) with a single '' ' '' (single back tick)."
] |
[
"TAGS\n#transformers #pytorch #encoder-decoder #text2text-generation #en #dataset-discofuse #arxiv-1907.12461 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# Roberta2Roberta_L-24_discofuse EncoderDecoder model\n\nThe model was introduced in \nthis paper by Sascha Rothe, Shashi Narayan, Aliaksei Severyn and first released in this repository. \n\nThe model is an encoder-decoder model that was initialized on the 'roberta-large' checkpoints for both the encoder \nand decoder and fine-tuned on sentencefusion on the discofuse dataset, which is linked above.\n\nDisclaimer: The model card has been written by the Hugging Face team.",
"## How to use\n\nYou can use this model for sentence fusion, *e.g.*\n\nIMPORTANT: The model was not trained on the '\"' (double quotation mark) character -> so the before tokenizing the text, it is advised to replace all '\"' (double quotation marks) with a single '' ' '' (single back tick)."
] |
summarization
|
transformers
|
# Roberta2Roberta_L-24_gigaword EncoderDecoder model
The model was introduced in
[this paper](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn and first released in [this repository](https://tfhub.dev/google/bertseq2seq/roberta24_gigaword/1).
The model is an encoder-decoder model that was initialized on the `roberta-large` checkpoints for both the encoder
and decoder and fine-tuned on headline generation using the Gigaword dataset, which is linked above.
Disclaimer: The model card has been written by the Hugging Face team.
## How to use
You can use this model for extreme summarization, *e.g.*
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("google/roberta2roberta_L-24_gigaword")
model = AutoModelForSeq2SeqLM.from_pretrained("google/roberta2roberta_L-24_gigaword")
article = """australian shares closed down #.# percent monday
following a weak lead from the united states and
lower commodity prices , dealers said ."""
input_ids = tokenizer(article, return_tensors="pt").input_ids
output_ids = model.generate(input_ids)[0]
print(tokenizer.decode(output_ids, skip_special_tokens=True))
# should output
# australian shares close down #.# percent.
```
|
{"language": "en", "license": "apache-2.0", "tags": ["summarization"], "datasets": ["gigaword"]}
|
google/roberta2roberta_L-24_gigaword
| null |
[
"transformers",
"pytorch",
"encoder-decoder",
"text2text-generation",
"summarization",
"en",
"dataset:gigaword",
"arxiv:1907.12461",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1907.12461"
] |
[
"en"
] |
TAGS
#transformers #pytorch #encoder-decoder #text2text-generation #summarization #en #dataset-gigaword #arxiv-1907.12461 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Roberta2Roberta_L-24_gigaword EncoderDecoder model
The model was introduced in
this paper by Sascha Rothe, Shashi Narayan, Aliaksei Severyn and first released in this repository.
The model is an encoder-decoder model that was initialized on the 'roberta-large' checkpoints for both the encoder
and decoder and fine-tuned on headline generation using the Gigaword dataset, which is linked above.
Disclaimer: The model card has been written by the Hugging Face team.
## How to use
You can use this model for extreme summarization, *e.g.*
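A minimal sketch of that usage, mirroring the full example shown in the markdown card above:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/roberta2roberta_L-24_gigaword")
model = AutoModelForSeq2SeqLM.from_pretrained("google/roberta2roberta_L-24_gigaword")

# Gigaword-style lower-cased news sentence; the model generates a headline.
article = """australian shares closed down #.# percent monday following a weak lead from the united states and lower commodity prices , dealers said ."""

input_ids = tokenizer(article, return_tensors="pt").input_ids
output_ids = model.generate(input_ids)[0]
print(tokenizer.decode(output_ids, skip_special_tokens=True))
# expected: australian shares close down #.# percent.
```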
|
[
"# Roberta2Roberta_L-24_gigaword EncoderDecoder model\n\nThe model was introduced in \nthis paper by Sascha Rothe, Shashi Narayan, Aliaksei Severyn and first released in this repository. \n\nThe model is an encoder-decoder model that was initialized on the 'roberta-large' checkpoints for both the encoder \nand decoder and fine-tuned on headline generation using the Gigaword dataset, which is linked above.\n\nDisclaimer: The model card has been written by the Hugging Face team.",
"## How to use\n\nYou can use this model for extreme summarization, *e.g.*"
] |
[
"TAGS\n#transformers #pytorch #encoder-decoder #text2text-generation #summarization #en #dataset-gigaword #arxiv-1907.12461 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Roberta2Roberta_L-24_gigaword EncoderDecoder model\n\nThe model was introduced in \nthis paper by Sascha Rothe, Shashi Narayan, Aliaksei Severyn and first released in this repository. \n\nThe model is an encoder-decoder model that was initialized on the 'roberta-large' checkpoints for both the encoder \nand decoder and fine-tuned on headline generation using the Gigaword dataset, which is linked above.\n\nDisclaimer: The model card has been written by the Hugging Face team.",
"## How to use\n\nYou can use this model for extreme summarization, *e.g.*"
] |
text2text-generation
|
transformers
|
# Roberta2Roberta_L-24_wikisplit EncoderDecoder model
The model was introduced in
[this paper](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn and first released in [this repository](https://tfhub.dev/google/bertseq2seq/roberta24_cnndm/1).
The model is an encoder-decoder model that was initialized on the `roberta-large` checkpoints for both the encoder
and decoder and fine-tuned on sentence splitting on the [WikiSplit](https://github.com/google-research-datasets/wiki-split) dataset.
Disclaimer: The model card has been written by the Hugging Face team.
## How to use
You can use this model for sentence splitting, *e.g.*
**IMPORTANT**: The model was not trained on the `"` (double quotation mark) character, so before tokenizing the text,
it is advised to replace all `"` (double quotation marks) with two single `'` (single quotation marks).
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("google/roberta2roberta_L-24_wikisplit")
model = AutoModelForSeq2SeqLM.from_pretrained("google/roberta2roberta_L-24_wikisplit")
long_sentence = """Due to the hurricane, Lobsterfest has been canceled, making Bob very happy about it and he decides to open Bob 's Burgers for customers who were planning on going to Lobsterfest."""
input_ids = tokenizer(tokenizer.bos_token + long_sentence + tokenizer.eos_token, return_tensors="pt").input_ids
output_ids = model.generate(input_ids)[0]
print(tokenizer.decode(output_ids, skip_special_tokens=True))
# should output
# Due to the hurricane, Lobsterfest has been canceled, making Bob very happy about it. He decides to open Bob's Burgers for customers who were planning on going to Lobsterfest.
```
|
{"language": "en", "license": "apache-2.0"}
|
google/roberta2roberta_L-24_wikisplit
| null |
[
"transformers",
"pytorch",
"encoder-decoder",
"text2text-generation",
"en",
"arxiv:1907.12461",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1907.12461"
] |
[
"en"
] |
TAGS
#transformers #pytorch #encoder-decoder #text2text-generation #en #arxiv-1907.12461 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# Roberta2Roberta_L-24_wikisplit EncoderDecoder model
The model was introduced in
this paper by Sascha Rothe, Shashi Narayan, Aliaksei Severyn and first released in this repository.
The model is an encoder-decoder model that was initialized on the 'roberta-large' checkpoints for both the encoder
and decoder and fine-tuned on sentence splitting on the WikiSplit dataset.
Disclaimer: The model card has been written by the Hugging Face team.
## How to use
You can use this model for sentence splitting, *e.g.*
IMPORTANT: The model was not trained on the '"' (double quotation mark) character, so before tokenizing the text,
it is advised to replace all '"' (double quotation marks) with two single ''' (single quotation marks).
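A minimal sketch of that usage, mirroring the full example shown in the markdown card above:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/roberta2roberta_L-24_wikisplit")
model = AutoModelForSeq2SeqLM.from_pretrained("google/roberta2roberta_L-24_wikisplit")

long_sentence = """Due to the hurricane, Lobsterfest has been canceled, making Bob very happy about it and he decides to open Bob 's Burgers for customers who were planning on going to Lobsterfest."""

# The card wraps the input in the tokenizer's BOS/EOS tokens before encoding.
input_ids = tokenizer(tokenizer.bos_token + long_sentence + tokenizer.eos_token, return_tensors="pt").input_ids
output_ids = model.generate(input_ids)[0]
print(tokenizer.decode(output_ids, skip_special_tokens=True))
```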
|
[
"# Roberta2Roberta_L-24_wikisplit EncoderDecoder model\n\nThe model was introduced in \nthis paper by Sascha Rothe, Shashi Narayan, Aliaksei Severyn and first released in this repository. \n\nThe model is an encoder-decoder model that was initialized on the 'roberta-large' checkpoints for both the encoder \nand decoder and fine-tuned on sentence splitting on the WikiSplit dataset.\n\nDisclaimer: The model card has been written by the Hugging Face team.",
"## How to use\n\nYou can use this model for sentence splitting, *e.g.*\n\nIMPORTANT: The model was not trained on the '\"' (double quotation mark) character -> so the before tokenizing the text, \nit is advised to replace all '\"' (double quotation marks) with two single ''' (single quotation mark)."
] |
[
"TAGS\n#transformers #pytorch #encoder-decoder #text2text-generation #en #arxiv-1907.12461 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# Roberta2Roberta_L-24_wikisplit EncoderDecoder model\n\nThe model was introduced in \nthis paper by Sascha Rothe, Shashi Narayan, Aliaksei Severyn and first released in this repository. \n\nThe model is an encoder-decoder model that was initialized on the 'roberta-large' checkpoints for both the encoder \nand decoder and fine-tuned on sentence splitting on the WikiSplit dataset.\n\nDisclaimer: The model card has been written by the Hugging Face team.",
"## How to use\n\nYou can use this model for sentence splitting, *e.g.*\n\nIMPORTANT: The model was not trained on the '\"' (double quotation mark) character -> so the before tokenizing the text, \nit is advised to replace all '\"' (double quotation marks) with two single ''' (single quotation mark)."
] |
text2text-generation
|
transformers
|
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) for **Closed Book Question Answering**.
The model was pre-trained using T5's denoising objective on [C4](https://huggingface.co/datasets/c4), subsequently additionally pre-trained using [REALM](https://arxiv.org/pdf/2002.08909.pdf)'s salient span masking objective on [Wikipedia](https://huggingface.co/datasets/wikipedia), and finally fine-tuned on [Natural Questions (NQ)](https://huggingface.co/datasets/natural_questions).
**Note**: The model was fine-tuned on 100% of the train splits of [Natural Questions (NQ)](https://huggingface.co/datasets/natural_questions) for 10k steps.
Other community Checkpoints: [here](https://huggingface.co/models?search=ssm)
Paper: [How Much Knowledge Can You Pack
Into the Parameters of a Language Model?](https://arxiv.org/abs/1910.10683.pdf)
Authors: *Adam Roberts, Colin Raffel, Noam Shazeer*
## Results on Natural Questions - Test Set
|Id | link | Exact Match |
|---|---|---|
|T5-small|https://huggingface.co/google/t5-small-ssm-nq|25.5|
|T5-large|https://huggingface.co/google/t5-large-ssm-nq|30.4|
|T5-xl|https://huggingface.co/google/t5-xl-ssm-nq|35.6|
|T5-xxl|https://huggingface.co/google/t5-xxl-ssm-nq|37.9|
|T5-3b|https://huggingface.co/google/t5-3b-ssm-nq|33.2|
|**T5-11b**|**https://huggingface.co/google/t5-11b-ssm-nq**|**36.6**|
## Usage
The model can be used as follows for **closed book question answering**:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
t5_qa_model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-11b-ssm-nq")
t5_tok = AutoTokenizer.from_pretrained("google/t5-11b-ssm-nq")
input_ids = t5_tok("When was Franklin D. Roosevelt born?", return_tensors="pt").input_ids
gen_output = t5_qa_model.generate(input_ids)[0]
print(t5_tok.decode(gen_output, skip_special_tokens=True))
```
## Abstract
It has recently been observed that neural language models trained on unstructured text can implicitly store and retrieve knowledge using natural language queries. In this short paper, we measure the practical utility of this approach by fine-tuning pre-trained models to answer questions without access to any external context or knowledge. We show that this approach scales with model size and performs competitively with open-domain systems that explicitly retrieve answers from an external knowledge source when answering questions. To facilitate reproducibility and future work, we release our code and trained models at https://goo.gle/t5-cbqa.

|
{"language": "en", "license": "apache-2.0", "datasets": ["c4", "wikipedia", "natural_questions"], "pipeline_tag": "text2text-generation"}
|
google/t5-11b-ssm-nq
| null |
[
"transformers",
"pytorch",
"tf",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"dataset:wikipedia",
"dataset:natural_questions",
"arxiv:2002.08909",
"arxiv:1910.10683",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2002.08909",
"1910.10683"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #t5 #text2text-generation #en #dataset-c4 #dataset-wikipedia #dataset-natural_questions #arxiv-2002.08909 #arxiv-1910.10683 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
Google's T5 for Closed Book Question Answering.
The model was pre-trained using T5's denoising objective on C4, subsequently additionally pre-trained using REALM's salient span masking objective on Wikipedia, and finally fine-tuned on Natural Questions (NQ).
Note: The model was fine-tuned on 100% of the train splits of Natural Questions (NQ) for 10k steps.
Other community Checkpoints: here
Paper: How Much Knowledge Can You Pack
Into the Parameters of a Language Model?
Authors: *Adam Roberts, Colin Raffel, Noam Shazeer*
Results on Natural Questions - Test Set
---------------------------------------
Id: T5-small, link: URL, Exact Match: 25.5
Id: T5-large, link: URL, Exact Match: 30.4
Id: T5-xl, link: URL, Exact Match: 35.6
Id: T5-xxl, link: URL, Exact Match: 37.9
Id: T5-3b, link: URL, Exact Match: 33.2
Id: T5-11b, link: URL, Exact Match: 36.6
Usage
-----
The model can be used as follows for closed book question answering:
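A minimal sketch of that usage, mirroring the example in the full card above (same checkpoint and question):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Closed book QA: the answer is generated from the model's parameters alone, with no retrieved context.
t5_qa_model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-11b-ssm-nq")
t5_tok = AutoTokenizer.from_pretrained("google/t5-11b-ssm-nq")

input_ids = t5_tok("When was Franklin D. Roosevelt born?", return_tensors="pt").input_ids
gen_output = t5_qa_model.generate(input_ids)[0]
print(t5_tok.decode(gen_output, skip_special_tokens=True))
```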
Abstract
--------
It has recently been observed that neural language models trained on unstructured text can implicitly store and retrieve knowledge using natural language queries. In this short paper, we measure the practical utility of this approach by fine-tuning pre-trained models to answer questions without access to any external context or knowledge. We show that this approach scales with model size and performs competitively with open-domain systems that explicitly retrieve answers from an external knowledge source when answering questions. To facilitate reproducibility and future work, we release our code and trained models at URL
!model image
|
[] |
[
"TAGS\n#transformers #pytorch #tf #t5 #text2text-generation #en #dataset-c4 #dataset-wikipedia #dataset-natural_questions #arxiv-2002.08909 #arxiv-1910.10683 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) for **Closed Book Question Answering**.
The model was pre-trained using T5's denoising objective on [C4](https://huggingface.co/datasets/c4), subsequently additionally pre-trained using [REALM](https://arxiv.org/pdf/2002.08909.pdf)'s salient span masking objective on [Wikipedia](https://huggingface.co/datasets/wikipedia), and finally fine-tuned on [Natural Questions (NQ)](https://huggingface.co/datasets/natural_questions).
**Note**: The model was fine-tuned on 90% of the train splits of [Natural Questions (NQ)](https://huggingface.co/datasets/natural_questions) for 20k steps and validated on the held-out 10% of the train split.
Other community Checkpoints: [here](https://huggingface.co/models?search=ssm)
Paper: [How Much Knowledge Can You Pack
Into the Parameters of a Language Model?](https://arxiv.org/abs/1910.10683.pdf)
Authors: *Adam Roberts, Colin Raffel, Noam Shazeer*
## Results on Natural Questions - Test Set
|Id | link | Exact Match |
|---|---|---|
|T5-large|https://huggingface.co/google/t5-large-ssm-nqo|29.0|
|T5-xxl|https://huggingface.co/google/t5-xxl-ssm-nqo|35.2|
|T5-3b|https://huggingface.co/google/t5-3b-ssm-nqo|31.7|
|**T5-11b**|**https://huggingface.co/google/t5-11b-ssm-nqo**|**34.8**|
## Usage
The model can be used as follows for **closed book question answering**:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
t5_qa_model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-11b-ssm-nqo")
t5_tok = AutoTokenizer.from_pretrained("google/t5-11b-ssm-nqo")
input_ids = t5_tok("When was Franklin D. Roosevelt born?", return_tensors="pt").input_ids
gen_output = t5_qa_model.generate(input_ids)[0]
print(t5_tok.decode(gen_output, skip_special_tokens=True))
```
## Abstract
It has recently been observed that neural language models trained on unstructured text can implicitly store and retrieve knowledge using natural language queries. In this short paper, we measure the practical utility of this approach by fine-tuning pre-trained models to answer questions without access to any external context or knowledge. We show that this approach scales with model size and performs competitively with open-domain systems that explicitly retrieve answers from an external knowledge source when answering questions. To facilitate reproducibility and future work, we release our code and trained models at https://goo.gle/t5-cbqa.

|
{"language": "en", "license": "apache-2.0", "datasets": ["c4", "wikipedia", "natural_questions"]}
|
google/t5-11b-ssm-nqo
| null |
[
"transformers",
"pytorch",
"tf",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"dataset:wikipedia",
"dataset:natural_questions",
"arxiv:2002.08909",
"arxiv:1910.10683",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2002.08909",
"1910.10683"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #t5 #text2text-generation #en #dataset-c4 #dataset-wikipedia #dataset-natural_questions #arxiv-2002.08909 #arxiv-1910.10683 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
Google's T5 for Closed Book Question Answering.
The model was pre-trained using T5's denoising objective on C4, subsequently additionally pre-trained using REALM's salient span masking objective on Wikipedia, and finally fine-tuned on Natural Questions (NQ).
Note: The model was fine-tuned on 90% of the train splits of Natural Questions (NQ) for 20k steps and validated on the held-out 10% of the train split.
Other community Checkpoints: here
Paper: How Much Knowledge Can You Pack
Into the Parameters of a Language Model?
Authors: *Adam Roberts, Colin Raffel, Noam Shazeer*
Results on Natural Questions - Test Set
---------------------------------------
Id: T5-large, link: URL, Exact Match: 29.0
Id: T5-xxl, link: URL, Exact Match: 35.2
Id: T5-3b, link: URL, Exact Match: 31.7
Id: T5-11b, link: URL, Exact Match: 34.8
Usage
-----
The model can be used as follows for closed book question answering:
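A minimal sketch of that usage, mirroring the example in the full card above (same checkpoint and question):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Closed book QA: the answer is generated from the model's parameters alone, with no retrieved context.
t5_qa_model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-11b-ssm-nqo")
t5_tok = AutoTokenizer.from_pretrained("google/t5-11b-ssm-nqo")

input_ids = t5_tok("When was Franklin D. Roosevelt born?", return_tensors="pt").input_ids
gen_output = t5_qa_model.generate(input_ids)[0]
print(t5_tok.decode(gen_output, skip_special_tokens=True))
```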
Abstract
--------
It has recently been observed that neural language models trained on unstructured text can implicitly store and retrieve knowledge using natural language queries. In this short paper, we measure the practical utility of this approach by fine-tuning pre-trained models to answer questions without access to any external context or knowledge. We show that this approach scales with model size and performs competitively with open-domain systems that explicitly retrieve answers from an external knowledge source when answering questions. To facilitate reproducibility and future work, we release our code and trained models at URL
!model image
|
[] |
[
"TAGS\n#transformers #pytorch #tf #t5 #text2text-generation #en #dataset-c4 #dataset-wikipedia #dataset-natural_questions #arxiv-2002.08909 #arxiv-1910.10683 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) for **Closed Book Question Answering**.
The model was pre-trained using T5's denoising objective on [C4](https://huggingface.co/datasets/c4), subsequently additionally pre-trained using [REALM](https://arxiv.org/pdf/2002.08909.pdf)'s salient span masking objective on [Wikipedia](https://huggingface.co/datasets/wikipedia), and finally fine-tuned on [Trivia QA (TQA)](https://huggingface.co/datasets/trivia_qa).
**Note**: The model was fine-tuned on 100% of the train splits of [Trivia QA (TQA)](https://huggingface.co/datasets/trivia_qa) for 10 steps.
Other community Checkpoints: [here](https://huggingface.co/models?search=ssm)
Paper: [How Much Knowledge Can You Pack
Into the Parameters of a Language Model?](https://arxiv.org/abs/1910.10683.pdf)
Authors: *Adam Roberts, Colin Raffel, Noam Shazeer*
## Results on Trivia QA - Test Set
|Id | link | Exact Match |
|---|---|---|
|**T5-11b**|**https://huggingface.co/google/t5-11b-ssm-tqa**|**60.5**|
|T5-xxl|https://huggingface.co/google/t5-xxl-ssm-tqa|61.6|
## Usage
The model can be used as follows for **closed book question answering**:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
t5_qa_model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-11b-ssm-tqa")
t5_tok = AutoTokenizer.from_pretrained("google/t5-11b-ssm-tqa")
input_ids = t5_tok("When was Franklin D. Roosevelt born?", return_tensors="pt").input_ids
gen_output = t5_qa_model.generate(input_ids)[0]
print(t5_tok.decode(gen_output, skip_special_tokens=True))
```
## Abstract
It has recently been observed that neural language models trained on unstructured text can implicitly store and retrieve knowledge using natural language queries. In this short paper, we measure the practical utility of this approach by fine-tuning pre-trained models to answer questions without access to any external context or knowledge. We show that this approach scales with model size and performs competitively with open-domain systems that explicitly retrieve answers from an external knowledge source when answering questions. To facilitate reproducibility and future work, we release our code and trained models at https://goo.gle/t5-cbqa.

|
{"language": "en", "license": "apache-2.0", "datasets": ["c4", "wikipedia", "trivia_qa"]}
|
google/t5-11b-ssm-tqa
| null |
[
"transformers",
"pytorch",
"tf",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"dataset:wikipedia",
"dataset:trivia_qa",
"arxiv:2002.08909",
"arxiv:1910.10683",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2002.08909",
"1910.10683"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #t5 #text2text-generation #en #dataset-c4 #dataset-wikipedia #dataset-trivia_qa #arxiv-2002.08909 #arxiv-1910.10683 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
Google's T5 for Closed Book Question Answering.
The model was pre-trained using T5's denoising objective on C4, subsequently additionally pre-trained using REALM's salient span masking objective on Wikipedia, and finally fine-tuned on Trivia QA (TQA).
Note: The model was fine-tuned on 100% of the train splits of Trivia QA (TQA) for 10 steps.
Other community Checkpoints: here
Paper: How Much Knowledge Can You Pack
Into the Parameters of a Language Model?
Authors: *Adam Roberts, Colin Raffel, Noam Shazeer*
Results on Trivia QA - Test Set
-------------------------------
Id: T5-11b, link: URL, Exact Match: 60.5
Id: T5-xxl, link: URL, Exact Match: 61.6
Usage
-----
The model can be used as follows for closed book question answering:
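A minimal sketch of that usage, mirroring the example in the full card above (same checkpoint and question):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Closed book QA: the answer is generated from the model's parameters alone, with no retrieved context.
t5_qa_model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-11b-ssm-tqa")
t5_tok = AutoTokenizer.from_pretrained("google/t5-11b-ssm-tqa")

input_ids = t5_tok("When was Franklin D. Roosevelt born?", return_tensors="pt").input_ids
gen_output = t5_qa_model.generate(input_ids)[0]
print(t5_tok.decode(gen_output, skip_special_tokens=True))
```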
Abstract
--------
It has recently been observed that neural language models trained on unstructured text can implicitly store and retrieve knowledge using natural language queries. In this short paper, we measure the practical utility of this approach by fine-tuning pre-trained models to answer questions without access to any external context or knowledge. We show that this approach scales with model size and performs competitively with open-domain systems that explicitly retrieve answers from an external knowledge source when answering questions. To facilitate reproducibility and future work, we release our code and trained models at URL
!model image
|
[] |
[
"TAGS\n#transformers #pytorch #tf #t5 #text2text-generation #en #dataset-c4 #dataset-wikipedia #dataset-trivia_qa #arxiv-2002.08909 #arxiv-1910.10683 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) for **Closed Book Question Answering**.
The model was pre-trained using T5's denoising objective on [C4](https://huggingface.co/datasets/c4), subsequently additionally pre-trained using [REALM](https://arxiv.org/pdf/2002.08909.pdf)'s salient span masking objective on [Wikipedia](https://huggingface.co/datasets/wikipedia), and finally fine-tuned on [Trivia QA (TQA)](https://huggingface.co/datasets/trivia_qa).
**Note**: The model was fine-tuned on 90% of the train splits of [Trivia QA (TQA)](https://huggingface.co/datasets/trivia_qa) for 20k steps and validated on the held-out 10% of the train split.
Other community Checkpoints: [here](https://huggingface.co/models?search=ssm)
Paper: [How Much Knowledge Can You Pack
Into the Parameters of a Language Model?](https://arxiv.org/abs/1910.10683.pdf)
Authors: *Adam Roberts, Colin Raffel, Noam Shazeer*
## Results on Trivia QA - Test Set
|Id | link | Exact Match |
|---|---|---|
|**T5-11b**|**https://huggingface.co/google/t5-11b-ssm-tqao**|**51.0**|
|T5-xxl|https://huggingface.co/google/t5-xxl-ssm-tqao|51.9|
## Usage
The model can be used as follows for **closed book question answering**:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
t5_qa_model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-11b-ssm-tqao")
t5_tok = AutoTokenizer.from_pretrained("google/t5-11b-ssm-tqao")
input_ids = t5_tok("When was Franklin D. Roosevelt born?", return_tensors="pt").input_ids
gen_output = t5_qa_model.generate(input_ids)[0]
print(t5_tok.decode(gen_output, skip_special_tokens=True))
```
## Abstract
It has recently been observed that neural language models trained on unstructured text can implicitly store and retrieve knowledge using natural language queries. In this short paper, we measure the practical utility of this approach by fine-tuning pre-trained models to answer questions without access to any external context or knowledge. We show that this approach scales with model size and performs competitively with open-domain systems that explicitly retrieve answers from an external knowledge source when answering questions. To facilitate reproducibility and future work, we release our code and trained models at https://goo.gle/t5-cbqa.

|
{"language": "en", "license": "apache-2.0", "datasets": ["c4", "wikipedia", "trivia_qa"]}
|
google/t5-11b-ssm-tqao
| null |
[
"transformers",
"pytorch",
"tf",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"dataset:wikipedia",
"dataset:trivia_qa",
"arxiv:2002.08909",
"arxiv:1910.10683",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2002.08909",
"1910.10683"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #t5 #text2text-generation #en #dataset-c4 #dataset-wikipedia #dataset-trivia_qa #arxiv-2002.08909 #arxiv-1910.10683 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
Google's T5 for Closed Book Question Answering.
The model was pre-trained using T5's denoising objective on C4, subsequently additionally pre-trained using REALM's salient span masking objective on Wikipedia, and finally fine-tuned on Trivia QA (TQA).
Note: The model was fine-tuned on 90% of the train splits of Trivia QA (TQA) for 20k steps and validated on the held-out 10% of the train split.
Other community Checkpoints: here
Paper: How Much Knowledge Can You Pack
Into the Parameters of a Language Model?
Authors: *Adam Roberts, Colin Raffel, Noam Shazeer*
Results on Trivia QA - Test Set
-------------------------------
Id: T5-11b, link: URL, Exact Match: 51.0
Id: T5-xxl, link: URL, Exact Match: 51.9
Usage
-----
The model can be used as follows for closed book question answering:
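A minimal sketch of that usage, mirroring the example in the full card above (same checkpoint and question):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Closed book QA: the answer is generated from the model's parameters alone, with no retrieved context.
t5_qa_model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-11b-ssm-tqao")
t5_tok = AutoTokenizer.from_pretrained("google/t5-11b-ssm-tqao")

input_ids = t5_tok("When was Franklin D. Roosevelt born?", return_tensors="pt").input_ids
gen_output = t5_qa_model.generate(input_ids)[0]
print(t5_tok.decode(gen_output, skip_special_tokens=True))
```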
Abstract
--------
It has recently been observed that neural language models trained on unstructured text can implicitly store and retrieve knowledge using natural language queries. In this short paper, we measure the practical utility of this approach by fine-tuning pre-trained models to answer questions without access to any external context or knowledge. We show that this approach scales with model size and performs competitively with open-domain systems that explicitly retrieve answers from an external knowledge source when answering questions. To facilitate reproducibility and future work, we release our code and trained models at URL
!model image
|
[] |
[
"TAGS\n#transformers #pytorch #tf #t5 #text2text-generation #en #dataset-c4 #dataset-wikipedia #dataset-trivia_qa #arxiv-2002.08909 #arxiv-1910.10683 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) for **Closed Book Question Answering**.
The model was pre-trained using T5's denoising objective on [C4](https://huggingface.co/datasets/c4), subsequently additionally pre-trained using [REALM](https://arxiv.org/pdf/2002.08909.pdf)'s salient span masking objective on [Wikipedia](https://huggingface.co/datasets/wikipedia), and finally fine-tuned on [Web Questions (WQ)](https://huggingface.co/datasets/web_questions).
**Note**: The model was fine-tuned on 100% of the train splits of [Web Questions (WQ)](https://huggingface.co/datasets/web_questions) for 10k steps.
Other community Checkpoints: [here](https://huggingface.co/models?search=ssm)
Paper: [How Much Knowledge Can You Pack
Into the Parameters of a Language Model?](https://arxiv.org/abs/1910.10683.pdf)
Authors: *Adam Roberts, Colin Raffel, Noam Shazeer*
## Results on Web Questions - Test Set
|Id | link | Exact Match |
|---|---|---|
|**T5-11b**|**https://huggingface.co/google/t5-11b-ssm-wq**|**44.7**|
|T5-xxl|https://huggingface.co/google/t5-xxl-ssm-wq|43.5|
## Usage
The model can be used as follows for **closed book question answering**:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
t5_qa_model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-11b-ssm-wq")
t5_tok = AutoTokenizer.from_pretrained("google/t5-11b-ssm-wq")
input_ids = t5_tok("When was Franklin D. Roosevelt born?", return_tensors="pt").input_ids
gen_output = t5_qa_model.generate(input_ids)[0]
print(t5_tok.decode(gen_output, skip_special_tokens=True))
```
## Abstract
It has recently been observed that neural language models trained on unstructured text can implicitly store and retrieve knowledge using natural language queries. In this short paper, we measure the practical utility of this approach by fine-tuning pre-trained models to answer questions without access to any external context or knowledge. We show that this approach scales with model size and performs competitively with open-domain systems that explicitly retrieve answers from an external knowledge source when answering questions. To facilitate reproducibility and future work, we release our code and trained models at https://goo.gle/t5-cbqa.

|
{"language": "en", "license": "apache-2.0", "datasets": ["c4", "wikipedia", "web_questions"]}
|
google/t5-11b-ssm-wq
| null |
[
"transformers",
"pytorch",
"tf",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"dataset:wikipedia",
"dataset:web_questions",
"arxiv:2002.08909",
"arxiv:1910.10683",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2002.08909",
"1910.10683"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #t5 #text2text-generation #en #dataset-c4 #dataset-wikipedia #dataset-web_questions #arxiv-2002.08909 #arxiv-1910.10683 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
Google's T5 for Closed Book Question Answering.
The model was pre-trained using T5's denoising objective on C4, subsequently additionally pre-trained using REALM's salient span masking objective on Wikipedia, and finally fine-tuned on Web Questions (WQ).
Note: The model was fine-tuned on 100% of the train splits of Web Questions (WQ) for 10k steps.
Other community Checkpoints: here
Paper: How Much Knowledge Can You Pack
Into the Parameters of a Language Model?
Authors: *Adam Roberts, Colin Raffel, Noam Shazeer*
Results on Web Questions - Test Set
-----------------------------------
Id: T5-11b, link: URL, Exact Match: 44.7
Id: T5-xxl, link: URL, Exact Match: 43.5
Usage
-----
The model can be used as follows for closed book question answering:
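A minimal sketch of that usage, mirroring the example in the full card above (same checkpoint and question):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Closed book QA: the answer is generated from the model's parameters alone, with no retrieved context.
t5_qa_model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-11b-ssm-wq")
t5_tok = AutoTokenizer.from_pretrained("google/t5-11b-ssm-wq")

input_ids = t5_tok("When was Franklin D. Roosevelt born?", return_tensors="pt").input_ids
gen_output = t5_qa_model.generate(input_ids)[0]
print(t5_tok.decode(gen_output, skip_special_tokens=True))
```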
Abstract
--------
It has recently been observed that neural language models trained on unstructured text can implicitly store and retrieve knowledge using natural language queries. In this short paper, we measure the practical utility of this approach by fine-tuning pre-trained models to answer questions without access to any external context or knowledge. We show that this approach scales with model size and performs competitively with open-domain systems that explicitly retrieve answers from an external knowledge source when answering questions. To facilitate reproducibility and future work, we release our code and trained models at URL
!model image
|
[] |
[
"TAGS\n#transformers #pytorch #tf #t5 #text2text-generation #en #dataset-c4 #dataset-wikipedia #dataset-web_questions #arxiv-2002.08909 #arxiv-1910.10683 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n"
] |
null | null |
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) for **Closed Book Question Answering**.
The model was pre-trained using T5's denoising objective on [C4](https://huggingface.co/datasets/c4), subsequently additionally pre-trained using [REALM](https://arxiv.org/pdf/2002.08909.pdf)'s salient span masking objective on [Wikipedia](https://huggingface.co/datasets/wikipedia), and finally fine-tuned on [Web Questions (WQ)](https://huggingface.co/datasets/web_questions).
**Note**: The model was fine-tuned on 90% of the train splits of [Web Questions (WQ)](https://huggingface.co/datasets/web_questions) for 20k steps and validated on the held-out 10% of the train split.
Other community Checkpoints: [here](https://huggingface.co/models?search=ssm)
Paper: [How Much Knowledge Can You Pack
Into the Parameters of a Language Model?](https://arxiv.org/abs/1910.10683.pdf)
Authors: *Adam Roberts, Colin Raffel, Noam Shazeer*
## Results on Web Questions - Test Set
|Id | link | Exact Match |
|---|---|---|
|**T5-11b**|**https://huggingface.co/google/t5-11b-ssm-wqo**|**40.8**|
|T5-xxl|https://huggingface.co/google/t5-xxl-ssm-wqo|42.8|
## Usage
The model can be used as follows for **closed book question answering**:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
t5_qa_model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-11b-ssm-wqo")
t5_tok = AutoTokenizer.from_pretrained("google/t5-11b-ssm-wqo")
input_ids = t5_tok("When was Franklin D. Roosevelt born?", return_tensors="pt").input_ids
gen_output = t5_qa_model.generate(input_ids)[0]
print(t5_tok.decode(gen_output, skip_special_tokens=True))
```
## Abstract
It has recently been observed that neural language models trained on unstructured text can implicitly store and retrieve knowledge using natural language queries. In this short paper, we measure the practical utility of this approach by fine-tuning pre-trained models to answer questions without access to any external context or knowledge. We show that this approach scales with model size and performs competitively with open-domain systems that explicitly retrieve answers from an external knowledge source when answering questions. To facilitate reproducibility and future work, we release our code and trained models at https://goo.gle/t5-cbqa.

|
{"language": "en", "license": "apache-2.0", "datasets": ["c4", "wikipedia", "web_questions"]}
|
google/t5-11b-ssm-wqo
| null |
[
"en",
"dataset:c4",
"dataset:wikipedia",
"dataset:web_questions",
"arxiv:2002.08909",
"arxiv:1910.10683",
"license:apache-2.0",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2002.08909",
"1910.10683"
] |
[
"en"
] |
TAGS
#en #dataset-c4 #dataset-wikipedia #dataset-web_questions #arxiv-2002.08909 #arxiv-1910.10683 #license-apache-2.0 #has_space #region-us
|
Google's T5 for Closed Book Question Answering.
The model was pre-trained using T5's denoising objective on C4, subsequently additionally pre-trained using REALM's salient span masking objective on Wikipedia, and finally fine-tuned on Web Questions (WQ).
Note: The model was fine-tuned on 90% of the train splits of Web Questions (WQ) for 20k steps and validated on the held-out 10% of the train split.
Other community Checkpoints: here
Paper: How Much Knowledge Can You Pack
Into the Parameters of a Language Model?
Authors: *Adam Roberts, Colin Raffel, Noam Shazeer*
Results on Web Questions - Test Set
-----------------------------------
Id: T5-11b, link: URL, Exact Match: 40.8
Id: T5-xxl, link: URL, Exact Match: 42.8
Usage
-----
The model can be used as follows for closed book question answering:
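A minimal sketch of that usage, mirroring the example in the full card above (same checkpoint and question):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Closed book QA: the answer is generated from the model's parameters alone, with no retrieved context.
t5_qa_model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-11b-ssm-wqo")
t5_tok = AutoTokenizer.from_pretrained("google/t5-11b-ssm-wqo")

input_ids = t5_tok("When was Franklin D. Roosevelt born?", return_tensors="pt").input_ids
gen_output = t5_qa_model.generate(input_ids)[0]
print(t5_tok.decode(gen_output, skip_special_tokens=True))
```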
Abstract
--------
It has recently been observed that neural language models trained on unstructured text can implicitly store and retrieve knowledge using natural language queries. In this short paper, we measure the practical utility of this approach by fine-tuning pre-trained models to answer questions without access to any external context or knowledge. We show that this approach scales with model size and performs competitively with open-domain systems that explicitly retrieve answers from an external knowledge source when answering questions. To facilitate reproducibility and future work, we release our code and trained models at URL
!model image
|
[] |
[
"TAGS\n#en #dataset-c4 #dataset-wikipedia #dataset-web_questions #arxiv-2002.08909 #arxiv-1910.10683 #license-apache-2.0 #has_space #region-us \n"
] |
text2text-generation
|
transformers
|
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) for **Closed Book Question Answering**.
The model was pre-trained using T5's denoising objective on [C4](https://huggingface.co/datasets/c4) and subsequently additionally pre-trained using [REALM](https://arxiv.org/pdf/2002.08909.pdf)'s salient span masking objective on [Wikipedia](https://huggingface.co/datasets/wikipedia).
**Note**: This model should be fine-tuned on a question answering downstream task before it is usable for closed book question answering.
Other Community Checkpoints: [here](https://huggingface.co/models?search=ssm)
Paper: [How Much Knowledge Can You Pack
Into the Parameters of a Language Model?](https://arxiv.org/abs/1910.10683.pdf)
Authors: *Adam Roberts, Colin Raffel, Noam Shazeer*
## Abstract
It has recently been observed that neural language models trained on unstructured text can implicitly store and retrieve knowledge using natural language queries. In this short paper, we measure the practical utility of this approach by fine-tuning pre-trained models to answer questions without access to any external context or knowledge. We show that this approach scales with model size and performs competitively with open-domain systems that explicitly retrieve answers from an external knowledge source when answering questions. To facilitate reproducibility and future work, we release our code and trained models at https://goo.gle/t5-cbqa.

|
{"language": "en", "license": "apache-2.0", "datasets": ["c4", "wikipedia"]}
|
google/t5-11b-ssm
| null |
[
"transformers",
"pytorch",
"tf",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"dataset:wikipedia",
"arxiv:2002.08909",
"arxiv:1910.10683",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2002.08909",
"1910.10683"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #t5 #text2text-generation #en #dataset-c4 #dataset-wikipedia #arxiv-2002.08909 #arxiv-1910.10683 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
Google's T5 for Closed Book Question Answering.
The model was pre-trained using T5's denoising objective on C4 and subsequently additionally pre-trained using REALM's salient span masking objective on Wikipedia.
Note: This model should be fine-tuned on a question answering downstream task before it is usable for closed book question answering.
Other Community Checkpoints: here
Paper: How Much Knowledge Can You Pack
Into the Parameters of a Language Model?
Authors: *Adam Roberts, Colin Raffel, Noam Shazeer*
## Abstract
It has recently been observed that neural language models trained on unstructured text can implicitly store and retrieve knowledge using natural language queries. In this short paper, we measure the practical utility of this approach by fine-tuning pre-trained models to answer questions without access to any external context or knowledge. We show that this approach scales with model size and performs competitively with open-domain systems that explicitly retrieve answers from an external knowledge source when answering questions. To facilitate reproducibility and future work, we release our code and trained models at URL
!model image
|
[
"## Abstract\n\nIt has recently been observed that neural language models trained on unstructured text can implicitly store and retrieve knowledge using natural language queries. In this short paper, we measure the practical utility of this approach by fine-tuning pre-trained models to answer questions without access to any external context or knowledge. We show that this approach scales with model size and performs competitively with open-domain systems that explicitly retrieve answers from an external knowledge source when answering questions. To facilitate reproducibility and future work, we release our code and trained models at URL\n\n!model image"
] |
[
"TAGS\n#transformers #pytorch #tf #t5 #text2text-generation #en #dataset-c4 #dataset-wikipedia #arxiv-2002.08909 #arxiv-1910.10683 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"## Abstract\n\nIt has recently been observed that neural language models trained on unstructured text can implicitly store and retrieve knowledge using natural language queries. In this short paper, we measure the practical utility of this approach by fine-tuning pre-trained models to answer questions without access to any external context or knowledge. We show that this approach scales with model size and performs competitively with open-domain systems that explicitly retrieve answers from an external knowledge source when answering questions. To facilitate reproducibility and future work, we release our code and trained models at URL\n\n!model image"
] |
text2text-generation
|
transformers
|
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) for **Closed Book Question Answering**.
The model was pre-trained using T5's denoising objective on [C4](https://huggingface.co/datasets/c4), subsequently additionally pre-trained using [REALM](https://arxiv.org/pdf/2002.08909.pdf)'s salient span masking objective on [Wikipedia](https://huggingface.co/datasets/wikipedia), and finally fine-tuned on [Natural Questions (NQ)](https://huggingface.co/datasets/natural_questions).
**Note**: The model was fine-tuned on 100% of the train splits of [Natural Questions (NQ)](https://huggingface.co/datasets/natural_questions) for 10k steps.
Other community Checkpoints: [here](https://huggingface.co/models?search=ssm)
Paper: [How Much Knowledge Can You Pack
Into the Parameters of a Language Model?](https://arxiv.org/abs/1910.10683.pdf)
Authors: *Adam Roberts, Colin Raffel, Noam Shazeer*
## Results on Natural Questions - Test Set
|Id | link | Exact Match |
|---|---|---|
|T5-small|https://huggingface.co/google/t5-small-ssm-nq|25.5|
|T5-large|https://huggingface.co/google/t5-large-ssm-nq|30.4|
|T5-xl|https://huggingface.co/google/t5-xl-ssm-nq|35.6|
|T5-xxl|https://huggingface.co/google/t5-xxl-ssm-nq|37.9|
|**T5-3b**|**https://huggingface.co/google/t5-3b-ssm-nq**|**33.2**|
|T5-11b|https://huggingface.co/google/t5-11b-ssm-nq|36.6|
## Usage
The model can be used as follows for **closed book question answering**:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
t5_qa_model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-3b-ssm-nq")
t5_tok = AutoTokenizer.from_pretrained("google/t5-3b-ssm-nq")
input_ids = t5_tok("When was Franklin D. Roosevelt born?", return_tensors="pt").input_ids
gen_output = t5_qa_model.generate(input_ids)[0]
print(t5_tok.decode(gen_output, skip_special_tokens=True))
```
## Abstract
It has recently been observed that neural language models trained on unstructured text can implicitly store and retrieve knowledge using natural language queries. In this short paper, we measure the practical utility of this approach by fine-tuning pre-trained models to answer questions without access to any external context or knowledge. We show that this approach scales with model size and performs competitively with open-domain systems that explicitly retrieve answers from an external knowledge source when answering questions. To facilitate reproducibility and future work, we release our code and trained models at https://goo.gle/t5-cbqa.

|
{"language": "en", "license": "apache-2.0", "datasets": ["c4", "wikipedia", "natural_questions"], "pipeline_tag": "text2text-generation"}
|
google/t5-3b-ssm-nq
| null |
[
"transformers",
"pytorch",
"tf",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"dataset:wikipedia",
"dataset:natural_questions",
"arxiv:2002.08909",
"arxiv:1910.10683",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2002.08909",
"1910.10683"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #t5 #text2text-generation #en #dataset-c4 #dataset-wikipedia #dataset-natural_questions #arxiv-2002.08909 #arxiv-1910.10683 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
Google's T5 for Closed Book Question Answering.
The model was pre-trained using T5's denoising objective on C4, subsequently additionally pre-trained using REALM's salient span masking objective on Wikipedia, and finally fine-tuned on Natural Questions (NQ).
Note: The model was fine-tuned on 100% of the train splits of Natural Questions (NQ) for 10k steps.
Other community Checkpoints: here
Paper: How Much Knowledge Can You Pack
Into the Parameters of a Language Model?
Authors: *Adam Roberts, Colin Raffel, Noam Shazeer*
Results on Natural Questions - Test Set
---------------------------------------
Id: T5-small, link: URL, Exact Match: 25.5
Id: T5-large, link: URL, Exact Match: 30.4
Id: T5-xl, link: URL, Exact Match: 35.6
Id: T5-xxl, link: URL, Exact Match: 37.9
Id: T5-3b, link: URL, Exact Match: 33.2
Id: T5-11b, link: URL, Exact Match: 36.6
Usage
-----
The model can be used as follows for closed book question answering:
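A minimal sketch of that usage, mirroring the example in the full card above (same checkpoint and question):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Closed book QA: the answer is generated from the model's parameters alone, with no retrieved context.
t5_qa_model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-3b-ssm-nq")
t5_tok = AutoTokenizer.from_pretrained("google/t5-3b-ssm-nq")

input_ids = t5_tok("When was Franklin D. Roosevelt born?", return_tensors="pt").input_ids
gen_output = t5_qa_model.generate(input_ids)[0]
print(t5_tok.decode(gen_output, skip_special_tokens=True))
```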
Abstract
--------
It has recently been observed that neural language models trained on unstructured text can implicitly store and retrieve knowledge using natural language queries. In this short paper, we measure the practical utility of this approach by fine-tuning pre-trained models to answer questions without access to any external context or knowledge. We show that this approach scales with model size and performs competitively with open-domain systems that explicitly retrieve answers from an external knowledge source when answering questions. To facilitate reproducibility and future work, we release our code and trained models at URL
!model image
|
[] |
[
"TAGS\n#transformers #pytorch #tf #t5 #text2text-generation #en #dataset-c4 #dataset-wikipedia #dataset-natural_questions #arxiv-2002.08909 #arxiv-1910.10683 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) for **Closed Book Question Answering**.
The model was pre-trained using T5's denoising objective on [C4](https://huggingface.co/datasets/c4), subsequently additionally pre-trained using [REALM](https://arxiv.org/pdf/2002.08909.pdf)'s salient span masking objective on [Wikipedia](https://huggingface.co/datasets/wikipedia), and finally fine-tuned on [Natural Questions (NQ)](https://huggingface.co/datasets/natural_questions).
**Note**: The model was fine-tuned on 90% of the train splits of [Natural Questions (NQ)](https://huggingface.co/datasets/natural_questions) for 20k steps and validated on the held-out 10% of the train split.
Other community Checkpoints: [here](https://huggingface.co/models?search=ssm)
Paper: [How Much Knowledge Can You Pack
Into the Parameters of a Language Model?](https://arxiv.org/abs/1910.10683.pdf)
Authors: *Adam Roberts, Colin Raffel, Noam Shazeer*
## Results on Natural Questions - Test Set
|Id | link | Exact Match |
|---|---|---|
|T5-large|https://huggingface.co/google/t5-large-ssm-nqo|29.0|
|T5-xxl|https://huggingface.co/google/t5-xxl-ssm-nqo|35.2|
|**T5-3b**|**https://huggingface.co/google/t5-3b-ssm-nqo**|**31.7**|
|T5-11b|https://huggingface.co/google/t5-11b-ssm-nqo|34.8|
## Usage
The model can be used as follows for **closed book question answering**:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
t5_qa_model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-3b-ssm-nqo")
t5_tok = AutoTokenizer.from_pretrained("google/t5-3b-ssm-nqo")
input_ids = t5_tok("When was Franklin D. Roosevelt born?", return_tensors="pt").input_ids
gen_output = t5_qa_model.generate(input_ids)[0]
print(t5_tok.decode(gen_output, skip_special_tokens=True))
```
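The same usage works with the TensorFlow classes (the repository is also tagged with TF weights); a minimal sketch, assuming the TF weights load directly (otherwise pass `from_pt=True`):
```python
from transformers import TFAutoModelForSeq2SeqLM, AutoTokenizer

# TensorFlow counterpart of the PyTorch snippet above.
t5_qa_model = TFAutoModelForSeq2SeqLM.from_pretrained("google/t5-3b-ssm-nqo")
t5_tok = AutoTokenizer.from_pretrained("google/t5-3b-ssm-nqo")

input_ids = t5_tok("When was Franklin D. Roosevelt born?", return_tensors="tf").input_ids
gen_output = t5_qa_model.generate(input_ids)[0]
print(t5_tok.decode(gen_output, skip_special_tokens=True))
```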
## Abstract
It has recently been observed that neural language models trained on unstructured text can implicitly store and retrieve knowledge using natural language queries. In this short paper, we measure the practical utility of this approach by fine-tuning pre-trained models to answer questions without access to any external context or knowledge. We show that this approach scales with model size and performs competitively with open-domain systems that explicitly retrieve answers from an external knowledge source when answering questions. To facilitate reproducibility and future work, we release our code and trained models at https://goo.gle/t5-cbqa.

|
{"language": "en", "license": "apache-2.0", "datasets": ["c4", "wikipedia", "natural_questions"]}
|
google/t5-3b-ssm-nqo
| null |
[
"transformers",
"pytorch",
"tf",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"dataset:wikipedia",
"dataset:natural_questions",
"arxiv:2002.08909",
"arxiv:1910.10683",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2002.08909",
"1910.10683"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #t5 #text2text-generation #en #dataset-c4 #dataset-wikipedia #dataset-natural_questions #arxiv-2002.08909 #arxiv-1910.10683 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
Google's T5 for Closed Book Question Answering.
The model was pre-trained using T5's denoising objective on C4, subsequently additionally pre-trained using REALM's salient span masking objective on Wikipedia, and finally fine-tuned on Natural Questions (NQ).
Note: The model was fine-tuned on 90% of the train splits of Natural Questions (NQ) for 20k steps and validated on the held-out 10% of the train split.
Other community Checkpoints: here
Paper: How Much Knowledge Can You Pack
Into the Parameters of a Language Model?
Authors: *Adam Roberts, Colin Raffel, Noam Shazeer*
Results on Natural Questions - Test Set
---------------------------------------
Id: T5-large, link: URL, Exact Match: 29.0
Id: T5-xxl, link: URL, Exact Match: 35.2
Id: T5-3b, link: URL, Exact Match: 31.7
Id: T5-11b, link: URL, Exact Match: 34.8
Usage
-----
The model can be used as follows for closed book question answering:
Abstract
--------
It has recently been observed that neural language models trained on unstructured text can implicitly store and retrieve knowledge using natural language queries. In this short paper, we measure the practical utility of this approach by fine-tuning pre-trained models to answer questions without access to any external context or knowledge. We show that this approach scales with model size and performs competitively with open-domain systems that explicitly retrieve answers from an external knowledge source when answering questions. To facilitate reproducibility and future work, we release our code and trained models at URL
!model image
|
[] |
[
"TAGS\n#transformers #pytorch #tf #t5 #text2text-generation #en #dataset-c4 #dataset-wikipedia #dataset-natural_questions #arxiv-2002.08909 #arxiv-1910.10683 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) for **Closed Book Question Answering**.
The model was pre-trained using T5's denoising objective on [C4](https://huggingface.co/datasets/c4) and then further pre-trained using [REALM](https://arxiv.org/pdf/2002.08909.pdf)'s salient span masking objective on [Wikipedia](https://huggingface.co/datasets/wikipedia).
**Note**: This model should be fine-tuned on a question answering downstream task before it is useable for closed book question answering.
Other Community Checkpoints: [here](https://huggingface.co/models?search=ssm)
Paper: [How Much Knowledge Can You Pack
Into the Parameters of a Language Model?](https://arxiv.org/abs/2002.08910)
Authors: *Adam Roberts, Colin Raffel, Noam Shazeer*
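Since this checkpoint is pretrained-only, it is typically loaded and then fine-tuned on a QA dataset; a minimal loading sketch (the training loop itself is omitted, and the question/answer pair below is purely illustrative):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Note: this is a 3B-parameter checkpoint and requires substantial memory.
model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-3b-ssm")
tokenizer = AutoTokenizer.from_pretrained("google/t5-3b-ssm")

# Closed-book QA fine-tuning pairs a question (input) with its answer (target).
batch = tokenizer("When was Franklin D. Roosevelt born?", return_tensors="pt")
labels = tokenizer("January 30, 1882", return_tensors="pt").input_ids
loss = model(**batch, labels=labels).loss  # feed this loss to your optimizer
```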
## Abstract
It has recently been observed that neural language models trained on unstructured text can implicitly store and retrieve knowledge using natural language queries. In this short paper, we measure the practical utility of this approach by fine-tuning pre-trained models to answer questions without access to any external context or knowledge. We show that this approach scales with model size and performs competitively with open-domain systems that explicitly retrieve answers from an external knowledge source when answering questions. To facilitate reproducibility and future work, we release our code and trained models at https://goo.gle/t5-cbqa.

|
{"language": "en", "license": "apache-2.0", "datasets": ["c4", "wikipedia"]}
|
google/t5-3b-ssm
| null |
[
"transformers",
"pytorch",
"tf",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"dataset:wikipedia",
"arxiv:2002.08909",
"arxiv:1910.10683",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2002.08909",
"1910.10683"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #t5 #text2text-generation #en #dataset-c4 #dataset-wikipedia #arxiv-2002.08909 #arxiv-1910.10683 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
Google's T5 for Closed Book Question Answering.
The model was pre-trained using T5's denoising objective on C4 and subsequently additionally pre-trained using REALM's salient span masking objective on Wikipedia.
Note: This model should be fine-tuned on a question answering downstream task before it is useable for closed book question answering.
Other Community Checkpoints: here
Paper: How Much Knowledge Can You Pack
Into the Parameters of a Language Model?
Authors: *Adam Roberts, Colin Raffel, Noam Shazeer*
## Abstract
It has recently been observed that neural language models trained on unstructured text can implicitly store and retrieve knowledge using natural language queries. In this short paper, we measure the practical utility of this approach by fine-tuning pre-trained models to answer questions without access to any external context or knowledge. We show that this approach scales with model size and performs competitively with open-domain systems that explicitly retrieve answers from an external knowledge source when answering questions. To facilitate reproducibility and future work, we release our code and trained models at URL
!model image
|
[
"## Abstract\n\nIt has recently been observed that neural language models trained on unstructured text can implicitly store and retrieve knowledge using natural language queries. In this short paper, we measure the practical utility of this approach by fine-tuning pre-trained models to answer questions without access to any external context or knowledge. We show that this approach scales with model size and performs competitively with open-domain systems that explicitly retrieve answers from an external knowledge source when answering questions. To facilitate reproducibility and future work, we release our code and trained models at URL\n\n!model image"
] |
[
"TAGS\n#transformers #pytorch #tf #t5 #text2text-generation #en #dataset-c4 #dataset-wikipedia #arxiv-2002.08909 #arxiv-1910.10683 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"## Abstract\n\nIt has recently been observed that neural language models trained on unstructured text can implicitly store and retrieve knowledge using natural language queries. In this short paper, we measure the practical utility of this approach by fine-tuning pre-trained models to answer questions without access to any external context or knowledge. We show that this approach scales with model size and performs competitively with open-domain systems that explicitly retrieve answers from an external knowledge source when answering questions. To facilitate reproducibility and future work, we release our code and trained models at URL\n\n!model image"
] |
text2text-generation
|
transformers
|
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) Version 1.1 - LM-Adapted
## Version 1.1 - LM-Adapted
[T5 Version 1.1 - LM Adapted](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#lm-adapted-t511lm100k) includes the following improvements compared to the original [T5 model](https://huggingface.co/t5-base):
- GEGLU activation in feed-forward hidden layer, rather than ReLU - see [here](https://arxiv.org/abs/2002.05202).
- Dropout was turned off in pre-training (quality win). Dropout should be re-enabled during fine-tuning.
- Pre-trained on C4 only without mixing in the downstream tasks.
- no parameter sharing between embedding and classifier layer
- "xl" and "xxl" replace "3B" and "11B". The model shapes are a bit different - larger `d_model` and smaller `num_heads` and `d_ff`.
and is pretrained on both the denoising and language modeling objective.
More specifically, this checkpoint is initialized from [T5 Version 1.1 - Base](https://huggingface.co/google/t5-v1_1-base)
and then trained for an additional 100K steps on the LM objective discussed in the [T5 paper](https://arxiv.org/pdf/1910.10683.pdf).
This adaptation improves the ability of the model to be used for prompt tuning.
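Because of the LM adaptation, the checkpoint can be prompted directly with a text prefix; a minimal sketch (generation quality without further fine-tuning or prompt tuning may be limited, and the prompt below is purely illustrative):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/t5-base-lm-adapt")
model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-base-lm-adapt")

# LM-style continuation of a prefix, matching the extra 100K LM training steps.
input_ids = tokenizer("The capital of France is", return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```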
**Note**: A popular fine-tuned version of the *T5 Version 1.1 - LM Adapted* model is [BigScience's T0pp](https://huggingface.co/bigscience/T0pp).
Pretraining Dataset: [C4](https://huggingface.co/datasets/c4)
Other Community Checkpoints: [here](https://huggingface.co/models?other=t5-lm-adapt)
Paper: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf)
Authors: *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*
## Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.

|
{"language": "en", "license": "apache-2.0", "tags": ["t5-lm-adapt"], "datasets": ["c4"]}
|
google/t5-base-lm-adapt
| null |
[
"transformers",
"pytorch",
"tf",
"t5",
"text2text-generation",
"t5-lm-adapt",
"en",
"dataset:c4",
"arxiv:2002.05202",
"arxiv:1910.10683",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2002.05202",
"1910.10683"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #t5 #text2text-generation #t5-lm-adapt #en #dataset-c4 #arxiv-2002.05202 #arxiv-1910.10683 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
Google's T5 Version 1.1 - LM-Adapted
## Version 1.1 - LM-Adapted
T5 Version 1.1 - LM Adapted includes the following improvements compared to the original T5 model:
- GEGLU activation in feed-forward hidden layer, rather than ReLU - see here.
- Dropout was turned off in pre-training (quality win). Dropout should be re-enabled during fine-tuning.
- Pre-trained on C4 only without mixing in the downstream tasks.
- no parameter sharing between embedding and classifier layer
- "xl" and "xxl" replace "3B" and "11B". The model shapes are a bit different - larger 'd_model' and smaller 'num_heads' and 'd_ff'.
and is pretrained on both the denoising and language modeling objective.
More specifically, this checkpoint is initialized from T5 Version 1.1 - Base
and then trained for an additional 100K steps on the LM objective discussed in the T5 paper.
This adaptation improves the ability of the model to be used for prompt tuning.
Note: A popular fine-tuned version of the *T5 Version 1.1 - LM Adapted* model is BigScience's T0pp.
Pretraining Dataset: C4
Other Community Checkpoints: here
Paper: Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
Authors: *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*
## Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.
!model image
|
[
"## Version 1.1 - LM-Adapted\n\nT5 Version 1.1 - LM Adapted includes the following improvements compared to the original T5 model:\n\n- GEGLU activation in feed-forward hidden layer, rather than ReLU - see here.\n\n- Dropout was turned off in pre-training (quality win). Dropout should be re-enabled during fine-tuning.\n\n- Pre-trained on C4 only without mixing in the downstream tasks.\n\n- no parameter sharing between embedding and classifier layer\n\n- \"xl\" and \"xxl\" replace \"3B\" and \"11B\". The model shapes are a bit different - larger 'd_model' and smaller 'num_heads' and 'd_ff'.\n\nand is pretrained on both the denoising and language modeling objective.\n\nMore specifically, this checkpoint is initialized from T5 Version 1.1 - Base \nand then trained for an additional 100K steps on the LM objective discussed in the T5 paper. \nThis adaptation improves the ability of the model to be used for prompt tuning.\n\nNote: A popular fine-tuned version of the *T5 Version 1.1 - LM Adapted* model is BigScience's T0pp.\n\nPretraining Dataset: C4\n\nOther Community Checkpoints: here\n\nPaper: Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer\n\nAuthors: *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*",
"## Abstract\n\nTransfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.\n\n!model image"
] |
[
"TAGS\n#transformers #pytorch #tf #t5 #text2text-generation #t5-lm-adapt #en #dataset-c4 #arxiv-2002.05202 #arxiv-1910.10683 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"## Version 1.1 - LM-Adapted\n\nT5 Version 1.1 - LM Adapted includes the following improvements compared to the original T5 model:\n\n- GEGLU activation in feed-forward hidden layer, rather than ReLU - see here.\n\n- Dropout was turned off in pre-training (quality win). Dropout should be re-enabled during fine-tuning.\n\n- Pre-trained on C4 only without mixing in the downstream tasks.\n\n- no parameter sharing between embedding and classifier layer\n\n- \"xl\" and \"xxl\" replace \"3B\" and \"11B\". The model shapes are a bit different - larger 'd_model' and smaller 'num_heads' and 'd_ff'.\n\nand is pretrained on both the denoising and language modeling objective.\n\nMore specifically, this checkpoint is initialized from T5 Version 1.1 - Base \nand then trained for an additional 100K steps on the LM objective discussed in the T5 paper. \nThis adaptation improves the ability of the model to be used for prompt tuning.\n\nNote: A popular fine-tuned version of the *T5 Version 1.1 - LM Adapted* model is BigScience's T0pp.\n\nPretraining Dataset: C4\n\nOther Community Checkpoints: here\n\nPaper: Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer\n\nAuthors: *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*",
"## Abstract\n\nTransfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.\n\n!model image"
] |
text2text-generation
|
transformers
|
# T5-Efficient-BASE-DL2 (Deep-Narrow version)
T5-Efficient-BASE-DL2 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details of the model architecture
This model checkpoint - **t5-efficient-base-dl2** - is of model type **Base** with the following variations:
- **dl** is **2**
It has **128.52** million parameters and thus requires *ca.* **514.09 MB** of memory in full precision (*fp32*)
or **257.05 MB** of memory in half precision (*fp16* or *bf16*).
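These figures follow directly from the parameter count, at roughly 4 bytes per parameter in full precision and 2 bytes in half precision; a quick back-of-the-envelope check (actual memory use additionally depends on activations, buffers and optimizer state):
```python
# Rough memory estimate for t5-efficient-base-dl2 from the parameter count above.
n_params = 128.52e6
print(f"fp32: {n_params * 4 / 1e6:.2f} MB")       # ~514 MB
print(f"fp16/bf16: {n_params * 2 / 1e6:.2f} MB")  # ~257 MB
```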
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
whereas the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the encoder and the decoder depth correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend the reader to go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future.
|
{"language": ["en"], "license": "apache-2.0", "tags": ["deep-narrow"], "datasets": ["c4"], "inference": false}
|
google/t5-efficient-base-dl2
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"deep-narrow",
"en",
"dataset:c4",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.10686"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us
|
T5-Efficient-BASE-DL2 (Deep-Narrow version)
===========================================
T5-Efficient-BASE-DL2 is a variation of Google's original T5 following the T5 model architecture.
It is a *pretrained-only* checkpoint and was released with the
paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures
of similar parameter count.
To quote the paper:
>
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
>
>
>
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
Details model architecture
--------------------------
This model checkpoint - t5-efficient-base-dl2 - is of model type Base with the following variations:
* dl is 2
It has 128.52 million parameters and thus requires *ca.* 514.09 MB of memory in full precision (*fp32*)
or 257.05 MB of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
whereas the following abbreviations are used:
If a model checkpoint has no specific *el* or *dl*, then both the encoder and the decoder depth correspond to *nl*.
Pre-Training
------------
The checkpoint was pretrained on the Colossal, Cleaned version of Common Crawl (C4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
Fine-Tuning
-----------
Note: This model is a pretrained checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
* Summarization
* Question Answering
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
Downstream Performance
----------------------
TODO: Add table if available
Computational Complexity
------------------------
TODO: Add table if available
More information
----------------
We strongly recommend the reader to go carefully through the original paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers to get a more nuanced understanding of this model checkpoint.
As explained in the following issue, checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept here as they might be ported potentially in the future.
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
# T5-Efficient-BASE-DL4 (Deep-Narrow version)
T5-Efficient-BASE-DL4 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details of the model architecture
This model checkpoint - **t5-efficient-base-dl4** - is of model type **Base** with the following variations:
- **dl** is **4**
It has **147.4** million parameters and thus requires *ca.* **589.62 MB** of memory in full precision (*fp32*)
or **294.81 MB** of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
whereas the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the encoder and the decoder depth correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
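Whichever example script is chosen, the checkpoint itself loads with the standard seq2seq classes; a minimal sketch (the `--model_name_or_path` flag refers to the Transformers example scripts linked above):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Load the pretrained-only checkpoint and hand it to any seq2seq fine-tuning
# script above (e.g. via --model_name_or_path google/t5-efficient-base-dl4).
tokenizer = AutoTokenizer.from_pretrained("google/t5-efficient-base-dl4")
model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-efficient-base-dl4")
print(model.config.num_layers, model.config.num_decoder_layers)  # 12 encoder / 4 decoder blocks
```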
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend the reader to go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future.
|
{"language": ["en"], "license": "apache-2.0", "tags": ["deep-narrow"], "datasets": ["c4"], "inference": false}
|
google/t5-efficient-base-dl4
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"deep-narrow",
"en",
"dataset:c4",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.10686"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us
|
T5-Efficient-BASE-DL4 (Deep-Narrow version)
===========================================
T5-Efficient-BASE-DL4 is a variation of Google's original T5 following the T5 model architecture.
It is a *pretrained-only* checkpoint and was released with the
paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures
of similar parameter count.
To quote the paper:
>
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
>
>
>
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
Details model architecture
--------------------------
This model checkpoint - t5-efficient-base-dl4 - is of model type Base with the following variations:
* dl is 4
It has 147.4 million parameters and thus requires *ca.* 589.62 MB of memory in full precision (*fp32*)
or 294.81 MB of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
whereas the following abbreviations are used:
If a model checkpoint has no specific *el* or *dl*, then both the encoder and the decoder depth correspond to *nl*.
Pre-Training
------------
The checkpoint was pretrained on the Colossal, Cleaned version of Common Crawl (C4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
Fine-Tuning
-----------
Note: This model is a pretrained checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
* Summarization
* Question Answering
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
Downstream Performance
----------------------
TODO: Add table if available
Computational Complexity
------------------------
TODO: Add table if available
More information
----------------
We strongly recommend the reader to go carefully through the original paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers to get a more nuanced understanding of this model checkpoint.
As explained in the following issue, checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept here as they might be ported potentially in the future.
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
# T5-Efficient-BASE-DL6 (Deep-Narrow version)
T5-Efficient-BASE-DL6 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details of the model architecture
This model checkpoint - **t5-efficient-base-dl6** - is of model type **Base** with the following variations:
- **dl** is **6**
It has **166.29** million parameters and thus requires *ca.* **665.15 MB** of memory in full precision (*fp32*)
or **332.57 MB** of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
whereas the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the encoder and the decoder depth correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
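Concretely, the span-based MLM objective replaces contiguous input spans with sentinel tokens and trains the decoder to reconstruct them; a minimal sketch of the input/target format (the sentence is purely illustrative, not taken from C4):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/t5-efficient-base-dl6")

# Corrupted input: masked spans are replaced by sentinel tokens ...
corrupted = "The quick brown <extra_id_0> jumps over the <extra_id_1> dog."
# ... and the target reconstructs the dropped spans after matching sentinels.
target = "<extra_id_0> fox <extra_id_1> lazy <extra_id_2>"

print(tokenizer(corrupted).input_ids)
print(tokenizer(target).input_ids)
```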
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend the reader to go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future.
|
{"language": ["en"], "license": "apache-2.0", "tags": ["deep-narrow"], "datasets": ["c4"], "inference": false}
|
google/t5-efficient-base-dl6
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"deep-narrow",
"en",
"dataset:c4",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.10686"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us
|
T5-Efficient-BASE-DL6 (Deep-Narrow version)
===========================================
T5-Efficient-BASE-DL6 is a variation of Google's original T5 following the T5 model architecture.
It is a *pretrained-only* checkpoint and was released with the
paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures
of similar parameter count.
To quote the paper:
>
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
>
>
>
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
Details model architecture
--------------------------
This model checkpoint - t5-efficient-base-dl6 - is of model type Base with the following variations:
* dl is 6
It has 166.29 million parameters and thus requires *ca.* 665.15 MB of memory in full precision (*fp32*)
or 332.57 MB of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
whereas the following abbreviations are used:
If a model checkpoint has no specific *el* or *dl*, then both the encoder and the decoder depth correspond to *nl*.
Pre-Training
------------
The checkpoint was pretrained on the Colossal, Cleaned version of Common Crawl (C4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
Fine-Tuning
-----------
Note: This model is a pretrained checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
* Summarization
* Question Answering
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
Downstream Performance
----------------------
TODO: Add table if available
Computational Complexity
------------------------
TODO: Add table if available
More information
----------------
We strongly recommend the reader to go carefully through the original paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers to get a more nuanced understanding of this model checkpoint.
As explained in the following issue, checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept here as they might be ported potentially in the future.
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
# T5-Efficient-BASE-DL8 (Deep-Narrow version)
T5-Efficient-BASE-DL8 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details of the model architecture
This model checkpoint - **t5-efficient-base-dl8** - is of model type **Base** with the following variations:
- **dl** is **8**
It has **185.17** million parameters and thus requires *ca.* **740.67 MB** of memory in full precision (*fp32*)
or **370.34 MB** of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
whereas the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl* value, then the number of encoder and decoder layers both correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
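As a minimal illustration of this objective (a sketch only, assuming `transformers` and `torch` are installed; the sentinel-token example below is made up for demonstration and is not taken from the training data), the checkpoint can be loaded and scored on a span-corruption style input:

```python
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

# Load the pretrained-only checkpoint (no task-specific fine-tuning yet).
tokenizer = AutoTokenizer.from_pretrained("google/t5-efficient-base-dl8")
model = T5ForConditionalGeneration.from_pretrained("google/t5-efficient-base-dl8")

# Span-corruption style input/target pair built with T5 sentinel tokens.
input_ids = tokenizer(
    "The <extra_id_0> walks in <extra_id_1> park", return_tensors="pt"
).input_ids
labels = tokenizer(
    "<extra_id_0> cute dog <extra_id_1> the <extra_id_2>", return_tensors="pt"
).input_ids

# The forward pass returns the span-corruption (MLM) loss for this pair.
with torch.no_grad():
    loss = model(input_ids=input_ids, labels=labels).loss
print(f"span-MLM loss: {loss.item():.3f}")
```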
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend that the reader go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might still be ported in the future.
|
{"language": ["en"], "license": "apache-2.0", "tags": ["deep-narrow"], "datasets": ["c4"], "inference": false}
|
google/t5-efficient-base-dl8
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"deep-narrow",
"en",
"dataset:c4",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.10686"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us
|
T5-Efficient-BASE-DL8 (Deep-Narrow version)
===========================================
T5-Efficient-BASE-DL8 is a variation of Google's original T5 following the T5 model architecture.
It is a *pretrained-only* checkpoint and was released with the
paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures
of similar parameter count.
To quote the paper:
>
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
>
>
>
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
Details of the model architecture
---------------------------------
This model checkpoint - t5-efficient-base-dl8 - is of model type Base with the following variations:
* dl is 8
It has 185.17 million parameters and thus requires *ca.* 740.67 MB of memory in full precision (*fp32*)
or 370.34 MB of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
where the following abbreviations are used:
If a model checkpoint has no specific *el* or *dl* value, then the number of encoder and decoder layers both correspond to *nl*.
Pre-Training
------------
The checkpoint was pretrained on the Colossal, Cleaned version of Common Crawl (C4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
Fine-Tuning
-----------
Note: This model is a pretrained checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
* Summarization
* Question Answering
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
Downstream Performance
----------------------
TODO: Add table if available
Computational Complexity
------------------------
TODO: Add table if available
More information
----------------
We strongly recommend that the reader go carefully through the original paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers to get a more nuanced understanding of this model checkpoint.
As explained in the following issue, checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept here as they might still be ported in the future.
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
# T5-Efficient-BASE-DM1000 (Deep-Narrow version)
T5-Efficient-BASE-DM1000 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details of the model architecture
This model checkpoint - **t5-efficient-base-dm1000** - is of model type **Base** with the following variations:
- **dm** is **1000**
It has **297.23** million parameters and thus requires *ca.* **1188.93 MB** of memory in full precision (*fp32*)
or **594.47 MB** of memory in half precision (*fp16* or *bf16*).
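These figures can be sanity-checked directly from the loaded model (a minimal sketch, assuming `transformers` with PyTorch is installed; the memory estimate simply counts 4 bytes per fp32 parameter):

```python
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("google/t5-efficient-base-dm1000")

# Count all parameters and estimate the full-precision footprint (4 bytes each).
num_params = sum(p.numel() for p in model.parameters())
fp32_mb = num_params * 4 / 1e6

print(f"parameters : {num_params / 1e6:.2f}M")  # should be close to 297.23M
print(f"fp32 memory: {fp32_mb:.2f} MB")         # should be close to 1188.93 MB
```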
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl* value, then the number of encoder and decoder layers both correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend that the reader go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might still be ported in the future.
|
{"language": ["en"], "license": "apache-2.0", "tags": ["deep-narrow"], "datasets": ["c4"], "inference": false}
|
google/t5-efficient-base-dm1000
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"deep-narrow",
"en",
"dataset:c4",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.10686"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us
|
T5-Efficient-BASE-DM1000 (Deep-Narrow version)
==============================================
T5-Efficient-BASE-DM1000 is a variation of Google's original T5 following the T5 model architecture.
It is a *pretrained-only* checkpoint and was released with the
paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures
of similar parameter count.
To quote the paper:
>
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
>
>
>
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
Details of the model architecture
---------------------------------
This model checkpoint - t5-efficient-base-dm1000 - is of model type Base with the following variations:
* dm is 1000
It has 297.23 million parameters and thus requires *ca.* 1188.93 MB of memory in full precision (*fp32*)
or 594.47 MB of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
where the following abbreviations are used:
If a model checkpoint has no specific *el* or *dl* value, then the number of encoder and decoder layers both correspond to *nl*.
Pre-Training
------------
The checkpoint was pretrained on the Colossal, Cleaned version of Common Crawl (C4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
Fine-Tuning
-----------
Note: This model is a pretrained checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
* Summarization
* Question Answering
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
Downstream Performance
----------------------
TODO: Add table if available
Computational Complexity
------------------------
TODO: Add table if available
More information
----------------
We strongly recommend that the reader go carefully through the original paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers to get a more nuanced understanding of this model checkpoint.
As explained in the following issue, checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept here as they might still be ported in the future.
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
# T5-Efficient-BASE-DM2000 (Deep-Narrow version)
T5-Efficient-BASE-DM2000 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details of the model architecture
This model checkpoint - **t5-efficient-base-dm2000** - is of model type **Base** with the following variations:
- **dm** is **2000**
It has **594.44** million parameters and thus requires *ca.* **2377.75 MB** of memory in full precision (*fp32*)
or **1188.87 MB** of memory in half precision (*fp16* or *bf16*).
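To actually work at the half-precision footprint quoted above, the weights can be cast after loading (a sketch only, assuming `transformers` and `torch` are available; fine-tuning itself is typically still run in fp32 or with mixed precision):

```python
import torch
from transformers import T5ForConditionalGeneration

# Load in full precision, then cast the weights to fp16 to halve the footprint.
model = T5ForConditionalGeneration.from_pretrained("google/t5-efficient-base-dm2000")
model = model.half()

num_params = sum(p.numel() for p in model.parameters())
print(f"parameters : {num_params / 1e6:.2f}M")        # ~594.44M
print(f"fp16 memory: {num_params * 2 / 1e6:.2f} MB")  # ~1188.87 MB
print(next(model.parameters()).dtype)                 # torch.float16
```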
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl* value, then the number of encoder and decoder layers both correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend that the reader go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might still be ported in the future.
|
{"language": ["en"], "license": "apache-2.0", "tags": ["deep-narrow"], "datasets": ["c4"], "inference": false}
|
google/t5-efficient-base-dm2000
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"deep-narrow",
"en",
"dataset:c4",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.10686"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us
|
T5-Efficient-BASE-DM2000 (Deep-Narrow version)
==============================================
T5-Efficient-BASE-DM2000 is a variation of Google's original T5 following the T5 model architecture.
It is a *pretrained-only* checkpoint and was released with the
paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures
of similar parameter count.
To quote the paper:
>
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
>
>
>
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
Details of the model architecture
---------------------------------
This model checkpoint - t5-efficient-base-dm2000 - is of model type Base with the following variations:
* dm is 2000
It has 594.44 million parameters and thus requires *ca.* 2377.75 MB of memory in full precision (*fp32*)
or 1188.87 MB of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
where the following abbreviations are used:
If a model checkpoint has no specific *el* or *dl* value, then the number of encoder and decoder layers both correspond to *nl*.
Pre-Training
------------
The checkpoint was pretrained on the Colossal, Cleaned version of Common Crawl (C4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
Fine-Tuning
-----------
Note: This model is a pretrained checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
* Summarization
* Question Answering
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
Downstream Performance
----------------------
TODO: Add table if available
Computational Complexity
------------------------
TODO: Add table if available
More information
----------------
We strongly recommend that the reader go carefully through the original paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers to get a more nuanced understanding of this model checkpoint.
As explained in the following issue, checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept here as they might still be ported in the future.
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
# T5-Efficient-BASE-DM256 (Deep-Narrow version)
T5-Efficient-BASE-DM256 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details of the model architecture
This model checkpoint - **t5-efficient-base-dm256** - is of model type **Base** with the following variations:
- **dm** is **256**
It has **74.33** million parameters and thus requires *ca.* **297.32 MB** of memory in full precision (*fp32*)
or **148.66 MB** of memory in half precision (*fp16* or *bf16*).
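The *dm* variation can be verified from the checkpoint configuration alone (a minimal sketch, assuming the Hugging Face `T5Config` attribute names; no model weights need to be downloaded for this):

```python
from transformers import T5Config

# Only the configuration is needed to inspect the architecture hyper-parameters.
config = T5Config.from_pretrained("google/t5-efficient-base-dm256")

print(config.d_model)                                 # 256 -> the "dm" variation
print(config.d_ff, config.num_heads, config.d_kv)     # remaining Base-sized dimensions
print(config.num_layers, config.num_decoder_layers)   # 12 / 12 -> the Base depth
```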
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl* value, then the number of encoder and decoder layers both correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend that the reader go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might still be ported in the future.
|
{"language": ["en"], "license": "apache-2.0", "tags": ["deep-narrow"], "datasets": ["c4"], "inference": false}
|
google/t5-efficient-base-dm256
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"deep-narrow",
"en",
"dataset:c4",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.10686"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us
|
T5-Efficient-BASE-DM256 (Deep-Narrow version)
=============================================
T5-Efficient-BASE-DM256 is a variation of Google's original T5 following the T5 model architecture.
It is a *pretrained-only* checkpoint and was released with the
paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures
of similar parameter count.
To quote the paper:
>
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
>
>
>
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
Details of the model architecture
---------------------------------
This model checkpoint - t5-efficient-base-dm256 - is of model type Base with the following variations:
* dm is 256
It has 74.33 million parameters and thus requires *ca.* 297.32 MB of memory in full precision (*fp32*)
or 148.66 MB of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
where the following abbreviations are used:
If a model checkpoint has no specific *el* or *dl* value, then the number of encoder and decoder layers both correspond to *nl*.
Pre-Training
------------
The checkpoint was pretrained on the Colossal, Cleaned version of Common Crawl (C4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
Fine-Tuning
-----------
Note: This model is a pretrained checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
* Summarization
* Question Answering
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
Downstream Performance
----------------------
TODO: Add table if available
Computational Complexity
------------------------
TODO: Add table if available
More information
----------------
We strongly recommend that the reader go carefully through the original paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers to get a more nuanced understanding of this model checkpoint.
As explained in the following issue, checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept here as they might still be ported in the future.
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
# T5-Efficient-BASE-DM512 (Deep-Narrow version)
T5-Efficient-BASE-DM512 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details of the model architecture
This model checkpoint - **t5-efficient-base-dm512** - is of model type **Base** with the following variations:
- **dm** is **512**
It has **148.63** million parameters and thus requires *ca.* **594.52 MB** of memory in full precision (*fp32*)
or **297.26 MB** of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl* value, then the number of encoder and decoder layers both correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
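Before turning to the full example scripts below, a single fine-tuning step might look roughly as follows (a sketch only; the task prefix, text pair, and hyper-parameters are placeholders and not part of this model card):

```python
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/t5-efficient-base-dm512")
model = T5ForConditionalGeneration.from_pretrained("google/t5-efficient-base-dm512")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# Placeholder seq2seq pair; a real run would iterate over a task dataset.
inputs = tokenizer(
    "summarize: The quick brown fox jumps over the lazy dog.", return_tensors="pt"
)
labels = tokenizer("A fox jumps over a dog.", return_tensors="pt").input_ids

model.train()
loss = model(**inputs, labels=labels).loss  # teacher-forced seq2seq cross-entropy
loss.backward()
optimizer.step()
optimizer.zero_grad()
```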
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend that the reader go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might still be ported in the future.
|
{"language": ["en"], "license": "apache-2.0", "tags": ["deep-narrow"], "datasets": ["c4"], "inference": false}
|
google/t5-efficient-base-dm512
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"deep-narrow",
"en",
"dataset:c4",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.10686"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us
|
T5-Efficient-BASE-DM512 (Deep-Narrow version)
=============================================
T5-Efficient-BASE-DM512 is a variation of Google's original T5 following the T5 model architecture.
It is a *pretrained-only* checkpoint and was released with the
paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures
of similar parameter count.
To quote the paper:
>
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
>
>
>
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
Details of the model architecture
---------------------------------
This model checkpoint - t5-efficient-base-dm512 - is of model type Base with the following variations:
* dm is 512
It has 148.63 million parameters and thus requires *ca.* 594.52 MB of memory in full precision (*fp32*)
or 297.26 MB of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
where the following abbreviations are used:
If a model checkpoint has no specific *el* or *dl* value, then the number of encoder and decoder layers both correspond to *nl*.
Pre-Training
------------
The checkpoint was pretrained on the Colossal, Cleaned version of Common Crawl (C4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
Fine-Tuning
-----------
Note: This model is a pretrained checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
* Summarization
* Question Answering
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
Downstream Performance
----------------------
TODO: Add table if available
Computational Complexity
------------------------
TODO: Add table if available
More information
----------------
We strongly recommend that the reader go carefully through the original paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers to get a more nuanced understanding of this model checkpoint.
As explained in the following issue, checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept here as they might still be ported in the future.
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
# T5-Efficient-BASE-EL16 (Deep-Narrow version)
T5-Efficient-BASE-EL16 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details of the model architecture
This model checkpoint - **t5-efficient-base-el16** - is of model type **Base** with the following variations:
- **el** is **16**
It has **251.25** million parameters and thus requires *ca.* **1005.01 MB** of memory in full precision (*fp32*)
or **502.51 MB** of memory in half precision (*fp16* or *bf16*).
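The asymmetric encoder/decoder depth of this checkpoint can be read off its configuration (a minimal sketch, assuming the Hugging Face `T5Config` attribute names; only the config file is fetched):

```python
from transformers import T5Config

config = T5Config.from_pretrained("google/t5-efficient-base-el16")

print(config.num_layers)          # 16 -> the "el" variation (encoder depth)
print(config.num_decoder_layers)  # 12 -> unchanged Base decoder depth
```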
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl* value, then the number of encoder and decoder layers both correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model (a minimal loading sketch is shown after the links below):
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
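Before adapting one of the example scripts above, the checkpoint can be loaded like any other T5 model. The snippet below is a minimal sketch only (the task prefix and input text are placeholders, and generations are not meaningful until the model has been fine-tuned):

```
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("google/t5-efficient-base-el16")
model = T5ForConditionalGeneration.from_pretrained("google/t5-efficient-base-el16")

# Pretrained-only checkpoint: this is a smoke test of loading and generation,
# not a demonstration of downstream quality.
inputs = tokenizer("summarize: studies have shown that owning a dog is good for you",
                   return_tensors="pt")
outputs = model.generate(**inputs, max_length=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Any of the example scripts linked above can then be pointed at this checkpoint via its model identifier.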
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend that the reader go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might potentially be ported in the future.
|
{"language": ["en"], "license": "apache-2.0", "tags": ["deep-narrow"], "datasets": ["c4"], "inference": false}
|
google/t5-efficient-base-el16
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"deep-narrow",
"en",
"dataset:c4",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.10686"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us
|
T5-Efficient-BASE-EL16 (Deep-Narrow version)
============================================
T5-Efficient-BASE-EL16 is a variation of Google's original T5 following the T5 model architecture.
It is a *pretrained-only* checkpoint and was released with the
paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures
of similar parameter count.
To quote the paper:
>
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
>
>
>
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
Model architecture details
--------------------------
This model checkpoint - t5-efficient-base-el16 - is of model type Base with the following variations:
* el is 16
It has 251.25 million parameters and thus requires *ca.* 1005.01 MB of memory in full precision (*fp32*)
or 502.51 MB of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
where the following abbreviations are used:
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
Pre-Training
------------
The checkpoint was pretrained on the Colossal, Cleaned version of Common Crawl (C4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
Fine-Tuning
-----------
Note: This model is a pretrained checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
* Summarization
* Question Answering
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
Downstream Performance
----------------------
TODO: Add table if available
Computational Complexity
------------------------
TODO: Add table if available
More information
----------------
We strongly recommend that the reader go carefully through the original paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers to get a more nuanced understanding of this model checkpoint.
As explained in the following issue, checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept here as they might potentially be ported in the future.
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
# T5-Efficient-BASE-EL2 (Deep-Narrow version)
T5-Efficient-BASE-EL2 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Model architecture details
This model checkpoint - **t5-efficient-base-el2** - is of model type **Base** with the following variations:
- **el** is **2**
It has **152.13** million parameters and thus requires *ca.* **608.51 MB** of memory in full precision (*fp32*)
or **304.26 MB** of memory in half precision (*fp16* or *bf16*).
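These memory estimates follow directly from the parameter count: 4 bytes per parameter in full precision and 2 bytes in half precision (with 1 MB taken as 10^6 bytes, as in the figures above). A quick sanity check, sketched with the `transformers` PyTorch classes (the exact count may differ marginally depending on which embedding/head weights are included):

```
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("google/t5-efficient-base-el2")
num_params = sum(p.numel() for p in model.parameters())
print(f"parameters: {num_params / 1e6:.2f}M")  # expected: ca. 152.13M
print(f"fp32: {num_params * 4 / 1e6:.2f} MB | fp16/bf16: {num_params * 2 / 1e6:.2f} MB")
```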
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model (a minimal loading sketch is shown after the links below):
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
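Before adapting one of the example scripts above, the checkpoint can be loaded like any other T5 model. The snippet below is a minimal sketch only (the task prefix and input text are placeholders, and generations are not meaningful until the model has been fine-tuned):

```
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("google/t5-efficient-base-el2")
model = T5ForConditionalGeneration.from_pretrained("google/t5-efficient-base-el2")

# Pretrained-only checkpoint: this is a smoke test of loading and generation,
# not a demonstration of downstream quality.
inputs = tokenizer("summarize: studies have shown that owning a dog is good for you",
                   return_tensors="pt")
outputs = model.generate(**inputs, max_length=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Any of the example scripts linked above can then be pointed at this checkpoint via its model identifier.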
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend that the reader go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might potentially be ported in the future.
|
{"language": ["en"], "license": "apache-2.0", "tags": ["deep-narrow"], "datasets": ["c4"], "inference": false}
|
google/t5-efficient-base-el2
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"deep-narrow",
"en",
"dataset:c4",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.10686"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us
|
T5-Efficient-BASE-EL2 (Deep-Narrow version)
===========================================
T5-Efficient-BASE-EL2 is a variation of Google's original T5 following the T5 model architecture.
It is a *pretrained-only* checkpoint and was released with the
paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures
of similar parameter count.
To quote the paper:
>
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
>
>
>
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
Model architecture details
--------------------------
This model checkpoint - t5-efficient-base-el2 - is of model type Base with the following variations:
* el is 2
It has 152.13 million parameters and thus requires *ca.* 608.51 MB of memory in full precision (*fp32*)
or 304.26 MB of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
where the following abbreviations are used:
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
Pre-Training
------------
The checkpoint was pretrained on the Colossal, Cleaned version of Common Crawl (C4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
Fine-Tuning
-----------
Note: This model is a pretrained checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
* Summarization
* Question Answering
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
Downstream Performance
----------------------
TODO: Add table if available
Computational Complexity
------------------------
TODO: Add table if available
More information
----------------
We strongly recommend that the reader go carefully through the original paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers to get a more nuanced understanding of this model checkpoint.
As explained in the following issue, checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept here as they might potentially be ported in the future.
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
# T5-Efficient-BASE-EL4 (Deep-Narrow version)
T5-Efficient-BASE-EL4 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Model architecture details
This model checkpoint - **t5-efficient-base-el4** - is of model type **Base** with the following variations:
- **el** is **4**
It has **166.29** million parameters and thus requires *ca.* **665.16 MB** of memory in full precision (*fp32*)
or **332.58 MB** of memory in half precision (*fp16* or *bf16*).
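These memory estimates follow directly from the parameter count: 4 bytes per parameter in full precision and 2 bytes in half precision (with 1 MB taken as 10^6 bytes, as in the figures above). A quick sanity check, sketched with the `transformers` PyTorch classes (the exact count may differ marginally depending on which embedding/head weights are included):

```
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("google/t5-efficient-base-el4")
num_params = sum(p.numel() for p in model.parameters())
print(f"parameters: {num_params / 1e6:.2f}M")  # expected: ca. 166.29M
print(f"fp32: {num_params * 4 / 1e6:.2f} MB | fp16/bf16: {num_params * 2 / 1e6:.2f} MB")
```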
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model (a minimal loading sketch is shown after the links below):
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
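Before adapting one of the example scripts above, the checkpoint can be loaded like any other T5 model. The snippet below is a minimal sketch only (the task prefix and input text are placeholders, and generations are not meaningful until the model has been fine-tuned):

```
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("google/t5-efficient-base-el4")
model = T5ForConditionalGeneration.from_pretrained("google/t5-efficient-base-el4")

# Pretrained-only checkpoint: this is a smoke test of loading and generation,
# not a demonstration of downstream quality.
inputs = tokenizer("summarize: studies have shown that owning a dog is good for you",
                   return_tensors="pt")
outputs = model.generate(**inputs, max_length=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Any of the example scripts linked above can then be pointed at this checkpoint via its model identifier.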
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend that the reader go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might potentially be ported in the future.
|
{"language": ["en"], "license": "apache-2.0", "tags": ["deep-narrow"], "datasets": ["c4"], "inference": false}
|
google/t5-efficient-base-el4
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"deep-narrow",
"en",
"dataset:c4",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.10686"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us
|
T5-Efficient-BASE-EL4 (Deep-Narrow version)
===========================================
T5-Efficient-BASE-EL4 is a variation of Google's original T5 following the T5 model architecture.
It is a *pretrained-only* checkpoint and was released with the
paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures
of similar parameter count.
To quote the paper:
>
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
>
>
>
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
Model architecture details
--------------------------
This model checkpoint - t5-efficient-base-el4 - is of model type Base with the following variations:
* el is 4
It has 166.29 million parameters and thus requires *ca.* 665.16 MB of memory in full precision (*fp32*)
or 332.58 MB of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
where the following abbreviations are used:
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
Pre-Training
------------
The checkpoint was pretrained on the Colossal, Cleaned version of Common Crawl (C4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
Fine-Tuning
-----------
Note: This model is a pretrained checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
* Summarization
* Question Answering
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
Downstream Performance
----------------------
TODO: Add table if available
Computational Complexity
------------------------
TODO: Add table if available
More information
----------------
We strongly recommend that the reader go carefully through the original paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers to get a more nuanced understanding of this model checkpoint.
As explained in the following issue, checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept here as they might potentially be ported in the future.
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
# T5-Efficient-BASE-EL6 (Deep-Narrow version)
T5-Efficient-BASE-EL6 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Model architecture details
This model checkpoint - **t5-efficient-base-el6** - is of model type **Base** with the following variations:
- **el** is **6**
It has **180.45** million parameters and thus requires *ca.* **721.8 MB** of memory in full precision (*fp32*)
or **360.9 MB** of memory in half precision (*fp16* or *bf16*).
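These memory estimates follow directly from the parameter count: 4 bytes per parameter in full precision and 2 bytes in half precision (with 1 MB taken as 10^6 bytes, as in the figures above). A quick sanity check, sketched with the `transformers` PyTorch classes (the exact count may differ marginally depending on which embedding/head weights are included):

```
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("google/t5-efficient-base-el6")
num_params = sum(p.numel() for p in model.parameters())
print(f"parameters: {num_params / 1e6:.2f}M")  # expected: ca. 180.45M
print(f"fp32: {num_params * 4 / 1e6:.2f} MB | fp16/bf16: {num_params * 2 / 1e6:.2f} MB")
```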
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model (a minimal loading sketch is shown after the links below):
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
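Before adapting one of the example scripts above, the checkpoint can be loaded like any other T5 model. The snippet below is a minimal sketch only (the task prefix and input text are placeholders, and generations are not meaningful until the model has been fine-tuned):

```
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("google/t5-efficient-base-el6")
model = T5ForConditionalGeneration.from_pretrained("google/t5-efficient-base-el6")

# Pretrained-only checkpoint: this is a smoke test of loading and generation,
# not a demonstration of downstream quality.
inputs = tokenizer("summarize: studies have shown that owning a dog is good for you",
                   return_tensors="pt")
outputs = model.generate(**inputs, max_length=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Any of the example scripts linked above can then be pointed at this checkpoint via its model identifier.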
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend that the reader go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might potentially be ported in the future.
|
{"language": ["en"], "license": "apache-2.0", "tags": ["deep-narrow"], "datasets": ["c4"], "inference": false}
|
google/t5-efficient-base-el6
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"deep-narrow",
"en",
"dataset:c4",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.10686"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us
|
T5-Efficient-BASE-EL6 (Deep-Narrow version)
===========================================
T5-Efficient-BASE-EL6 is a variation of Google's original T5 following the T5 model architecture.
It is a *pretrained-only* checkpoint and was released with the
paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures
of similar parameter count.
To quote the paper:
>
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
>
>
>
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
Model architecture details
--------------------------
This model checkpoint - t5-efficient-base-el6 - is of model type Base with the following variations:
* el is 6
It has 180.45 million parameters and thus requires *ca.* 721.8 MB of memory in full precision (*fp32*)
or 360.9 MB of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
where the following abbreviations are used:
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
Pre-Training
------------
The checkpoint was pretrained on the Colossal, Cleaned version of Common Crawl (C4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
Fine-Tuning
-----------
Note: This model is a pretrained checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
* Summarization
* Question Answering
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
Downstream Performance
----------------------
TODO: Add table if available
Computational Complexity
------------------------
TODO: Add table if available
More information
----------------
We strongly recommend that the reader go carefully through the original paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers to get a more nuanced understanding of this model checkpoint.
As explained in the following issue, checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept here as they might potentially be ported in the future.
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
# T5-Efficient-BASE-EL8 (Deep-Narrow version)
T5-Efficient-BASE-EL8 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Model architecture details
This model checkpoint - **t5-efficient-base-el8** - is of model type **Base** with the following variations:
- **el** is **8**
It has **194.61** million parameters and thus requires *ca.* **778.44 MB** of memory in full precision (*fp32*)
or **389.22 MB** of memory in half precision (*fp16* or *bf16*).
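These memory estimates follow directly from the parameter count: 4 bytes per parameter in full precision and 2 bytes in half precision (with 1 MB taken as 10^6 bytes, as in the figures above). A quick sanity check, sketched with the `transformers` PyTorch classes (the exact count may differ marginally depending on which embedding/head weights are included):

```
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("google/t5-efficient-base-el8")
num_params = sum(p.numel() for p in model.parameters())
print(f"parameters: {num_params / 1e6:.2f}M")  # expected: ca. 194.61M
print(f"fp32: {num_params * 4 / 1e6:.2f} MB | fp16/bf16: {num_params * 2 / 1e6:.2f} MB")
```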
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model (a minimal loading sketch is shown after the links below):
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
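Before adapting one of the example scripts above, the checkpoint can be loaded like any other T5 model. The snippet below is a minimal sketch only (the task prefix and input text are placeholders, and generations are not meaningful until the model has been fine-tuned):

```
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("google/t5-efficient-base-el8")
model = T5ForConditionalGeneration.from_pretrained("google/t5-efficient-base-el8")

# Pretrained-only checkpoint: this is a smoke test of loading and generation,
# not a demonstration of downstream quality.
inputs = tokenizer("summarize: studies have shown that owning a dog is good for you",
                   return_tensors="pt")
outputs = model.generate(**inputs, max_length=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Any of the example scripts linked above can then be pointed at this checkpoint via its model identifier.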
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend that the reader go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might potentially be ported in the future.
|
{"language": ["en"], "license": "apache-2.0", "tags": ["deep-narrow"], "datasets": ["c4"], "inference": false}
|
google/t5-efficient-base-el8
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"deep-narrow",
"en",
"dataset:c4",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.10686"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us
|
T5-Efficient-BASE-EL8 (Deep-Narrow version)
===========================================
T5-Efficient-BASE-EL8 is a variation of Google's original T5 following the T5 model architecture.
It is a *pretrained-only* checkpoint and was released with the
paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures
of similar parameter count.
To quote the paper:
>
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
>
>
>
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
Model architecture details
--------------------------
This model checkpoint - t5-efficient-base-el8 - is of model type Base with the following variations:
* el is 8
It has 194.61 million parameters and thus requires *ca.* 778.44 MB of memory in full precision (*fp32*)
or 389.22 MB of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
where the following abbreviations are used:
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
Pre-Training
------------
The checkpoint was pretrained on the Colossal, Cleaned version of Common Crawl (C4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
Fine-Tuning
-----------
Note: This model is a pretrained checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
* Summarization
* Question Answering
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
Downstream Performance
----------------------
TODO: Add table if available
Computational Complexity
------------------------
TODO: Add table if available
More information
----------------
We strongly recommend that the reader go carefully through the original paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers to get a more nuanced understanding of this model checkpoint.
As explained in the following issue, checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept here as they might potentially be ported in the future.
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
# T5-Efficient-BASE-FF1000 (Deep-Narrow version)
T5-Efficient-BASE-FF1000 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Model architecture details
This model checkpoint - **t5-efficient-base-ff1000** - is of model type **Base** with the following variations:
- **ff** is **1000**
It has **147.43** million parameters and thus requires *ca.* **589.74 MB** of memory in full precision (*fp32*)
or **294.87 MB** of memory in half precision (*fp16* or *bf16*).
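These memory estimates follow directly from the parameter count: 4 bytes per parameter in full precision and 2 bytes in half precision (with 1 MB taken as 10^6 bytes, as in the figures above). A quick sanity check, sketched with the `transformers` PyTorch classes (the exact count may differ marginally depending on which embedding/head weights are included):

```
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("google/t5-efficient-base-ff1000")
num_params = sum(p.numel() for p in model.parameters())
print(f"parameters: {num_params / 1e6:.2f}M")  # expected: ca. 147.43M
print(f"fp32: {num_params * 4 / 1e6:.2f} MB | fp16/bf16: {num_params * 2 / 1e6:.2f} MB")
```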
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model (a minimal loading sketch is shown after the list):
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
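As a minimal sketch, the checkpoint can be loaded for seq2seq fine-tuning with the standard Transformers classes (`AutoTokenizer`, `T5ForConditionalGeneration`); the texts below are placeholders, and real fine-tuning should follow one of the example scripts above:

```
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/t5-efficient-base-ff1000")
model = T5ForConditionalGeneration.from_pretrained("google/t5-efficient-base-ff1000")

# Toy seq2seq step: encode a source and a target text, then compute the loss.
inputs = tokenizer("summarize: The quick brown fox jumps over the lazy dog.", return_tensors="pt")
labels = tokenizer("A fox jumps over a dog.", return_tensors="pt").input_ids

loss = model(**inputs, labels=labels).loss
loss.backward()
```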
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend that the reader go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported in the future.
|
{"language": ["en"], "license": "apache-2.0", "tags": ["deep-narrow"], "datasets": ["c4"], "inference": false}
|
google/t5-efficient-base-ff1000
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"deep-narrow",
"en",
"dataset:c4",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.10686"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us
|
T5-Efficient-BASE-FF1000 (Deep-Narrow version)
==============================================
T5-Efficient-BASE-FF1000 is a variation of Google's original T5 following the T5 model architecture.
It is a *pretrained-only* checkpoint and was released with the
paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures
of similar parameter count.
To quote the paper:
>
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
>
>
>
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
Model architecture details
--------------------------
This model checkpoint - t5-efficient-base-ff1000 - is of model type Base with the following variations:
* ff is 1000
It has 147.43 million parameters and thus requires *ca.* 589.74 MB of memory in full precision (*fp32*)
or 294.87 MB of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
where the following abbreviations are used:
If a model checkpoint has no specific *el* or *dl*, then the number of encoder layers and decoder layers both correspond to *nl*.
Pre-Training
------------
The checkpoint was pretrained on the Colossal, Cleaned version of Common Crawl (C4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
Fine-Tuning
-----------
Note: This model is a pretrained checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model:
*PyTorch*:
* Summarization
* Question Answering
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
Downstream Performance
----------------------
TODO: Add table if available
Computational Complexity
------------------------
TODO: Add table if available
More information
----------------
We strongly recommend that the reader go carefully through the original paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers to get a more nuanced understanding of this model checkpoint.
As explained in the following issue, checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept here as they might be ported in the future.
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
# T5-Efficient-BASE-FF12000 (Deep-Narrow version)
T5-Efficient-BASE-FF12000 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Model architecture details
This model checkpoint - **t5-efficient-base-ff12000** - is of model type **Base** with the following variations:
- **ff** is **12000**
It has **562.67** million parameters and thus requires *ca.* **2250.68 MB** of memory in full precision (*fp32*)
or **1125.34 MB** of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then the number of encoder layers and decoder layers both correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
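As a rough sketch of what the span-corruption objective looks like at inference time, the checkpoint can be prompted with T5 sentinel tokens; since the model is pretrained-only, the generated completions are illustrative and their quality is not guaranteed (assumes a recent Transformers version with `generate` and `max_new_tokens`):

```
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/t5-efficient-base-ff12000")
model = T5ForConditionalGeneration.from_pretrained("google/t5-efficient-base-ff12000")

# Sentinel tokens (<extra_id_0>, <extra_id_1>, ...) mark the corrupted spans
# that the model was pretrained to reconstruct.
input_ids = tokenizer("The <extra_id_0> walks in <extra_id_1> park.", return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```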
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend that the reader go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported in the future.
|
{"language": ["en"], "license": "apache-2.0", "tags": ["deep-narrow"], "datasets": ["c4"], "inference": false}
|
google/t5-efficient-base-ff12000
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"deep-narrow",
"en",
"dataset:c4",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.10686"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us
|
T5-Efficient-BASE-FF12000 (Deep-Narrow version)
===============================================
T5-Efficient-BASE-FF12000 is a variation of Google's original T5 following the T5 model architecture.
It is a *pretrained-only* checkpoint and was released with the
paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures
of similar parameter count.
To quote the paper:
>
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
>
>
>
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
Model architecture details
--------------------------
This model checkpoint - t5-efficient-base-ff12000 - is of model type Base with the following variations:
* ff is 12000
It has 562.67 million parameters and thus requires *ca.* 2250.68 MB of memory in full precision (*fp32*)
or 1125.34 MB of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
where the following abbreviations are used:
If a model checkpoint has no specific *el* or *dl*, then the number of encoder layers and decoder layers both correspond to *nl*.
Pre-Training
------------
The checkpoint was pretrained on the Colossal, Cleaned version of Common Crawl (C4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
Fine-Tuning
-----------
Note: This model is a pretrained checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model:
*PyTorch*:
* Summarization
* Question Answering
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
Downstream Performance
----------------------
TODO: Add table if available
Computational Complexity
------------------------
TODO: Add table if available
More information
----------------
We strongly recommend that the reader go carefully through the original paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers to get a more nuanced understanding of this model checkpoint.
As explained in the following issue, checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept here as they might be ported in the future.
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
# T5-Efficient-BASE-FF2000 (Deep-Narrow version)
T5-Efficient-BASE-FF2000 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Model architecture details
This model checkpoint - **t5-efficient-base-ff2000** - is of model type **Base** with the following variations:
- **ff** is **2000**
It has **185.18** million parameters and thus requires *ca.* **740.73 MB** of memory in full precision (*fp32*)
or **370.37 MB** of memory in half precision (*fp16* or *bf16*).
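The stated parameter count can be checked after loading the checkpoint; a small sketch using the standard Transformers loading API:

```
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("google/t5-efficient-base-ff2000")
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.2f}M parameters")  # expected to be close to the ~185M stated above
```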
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then the number of encoder layers and decoder layers both correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend that the reader go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported in the future.
|
{"language": ["en"], "license": "apache-2.0", "tags": ["deep-narrow"], "datasets": ["c4"], "inference": false}
|
google/t5-efficient-base-ff2000
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"deep-narrow",
"en",
"dataset:c4",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.10686"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us
|
T5-Efficient-BASE-FF2000 (Deep-Narrow version)
==============================================
T5-Efficient-BASE-FF2000 is a variation of Google's original T5 following the T5 model architecture.
It is a *pretrained-only* checkpoint and was released with the
paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures
of similar parameter count.
To quote the paper:
>
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
>
>
>
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
Model architecture details
--------------------------
This model checkpoint - t5-efficient-base-ff2000 - is of model type Base with the following variations:
* ff is 2000
It has 185.18 million parameters and thus requires *ca.* 740.73 MB of memory in full precision (*fp32*)
or 370.37 MB of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
where the following abbreviations are used:
If a model checkpoint has no specific *el* or *dl*, then the number of encoder layers and decoder layers both correspond to *nl*.
Pre-Training
------------
The checkpoint was pretrained on the Colossal, Cleaned version of Common Crawl (C4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
Fine-Tuning
-----------
Note: This model is a pretrained checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model:
*PyTorch*:
* Summarization
* Question Answering
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
Downstream Performance
----------------------
TODO: Add table if available
Computational Complexity
------------------------
TODO: Add table if available
More information
----------------
We strongly recommend that the reader go carefully through the original paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers to get a more nuanced understanding of this model checkpoint.
As explained in the following issue, checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept here as they might be ported in the future.
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
# T5-Efficient-BASE-FF6000 (Deep-Narrow version)
T5-Efficient-BASE-FF6000 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Model architecture details
This model checkpoint - **t5-efficient-base-ff6000** - is of model type **Base** with the following variations:
- **ff** is **6000**
It has **336.18** million parameters and thus requires *ca.* **1344.71 MB** of memory in full precision (*fp32*)
or **672.36 MB** of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then the number of encoder layers and decoder layers both correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model (a minimal fine-tuning sketch is shown after the list):
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
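As a minimal sketch of a single fine-tuning step with a plain PyTorch optimizer (the texts are placeholders and the learning rate is an arbitrary choice, not a recommended setting):

```
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/t5-efficient-base-ff6000")
model = T5ForConditionalGeneration.from_pretrained("google/t5-efficient-base-ff6000")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# One toy optimization step on a single summarization-style example.
inputs = tokenizer("summarize: Transformers are encoder-decoder models.", return_tensors="pt")
labels = tokenizer("Encoder-decoder models.", return_tensors="pt").input_ids

loss = model(**inputs, labels=labels).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
```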
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend that the reader go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported in the future.
|
{"language": ["en"], "license": "apache-2.0", "tags": ["deep-narrow"], "datasets": ["c4"], "inference": false}
|
google/t5-efficient-base-ff6000
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"deep-narrow",
"en",
"dataset:c4",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.10686"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us
|
T5-Efficient-BASE-FF6000 (Deep-Narrow version)
==============================================
T5-Efficient-BASE-FF6000 is a variation of Google's original T5 following the T5 model architecture.
It is a *pretrained-only* checkpoint and was released with the
paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures
of similar parameter count.
To quote the paper:
>
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
>
>
>
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
Model architecture details
--------------------------
This model checkpoint - t5-efficient-base-ff6000 - is of model type Base with the following variations:
* ff is 6000
It has 336.18 million parameters and thus requires *ca.* 1344.71 MB of memory in full precision (*fp32*)
or 672.36 MB of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
where the following abbreviations are used:
If a model checkpoint has no specific *el* or *dl*, then the number of encoder layers and decoder layers both correspond to *nl*.
Pre-Training
------------
The checkpoint was pretrained on the Colossal, Cleaned version of Common Crawl (C4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
Fine-Tuning
-----------
Note: This model is a pretrained checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model:
*PyTorch*:
* Summarization
* Question Answering
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
Downstream Performance
----------------------
TODO: Add table if available
Computational Complexity
------------------------
TODO: Add table if available
More information
----------------
We strongly recommend that the reader go carefully through the original paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers to get a more nuanced understanding of this model checkpoint.
As explained in the following issue, checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept here as they might be ported in the future.
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
# T5-Efficient-BASE-FF9000 (Deep-Narrow version)
T5-Efficient-BASE-FF9000 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Model architecture details
This model checkpoint - **t5-efficient-base-ff9000** - is of model type **Base** with the following variations:
- **ff** is **9000**
It has **449.42** million parameters and thus requires *ca.* **1797.7 MB** of memory in full precision (*fp32*)
or **898.85 MB** of memory in half precision (*fp16* or *bf16*).
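To stay close to the half-precision footprint quoted above, the weights can be loaded directly in bfloat16; a brief sketch, assuming a recent Transformers version that supports the `torch_dtype` argument:

```
import torch
from transformers import T5ForConditionalGeneration

# Load the weights in bfloat16 to roughly halve the memory footprint quoted above.
model = T5ForConditionalGeneration.from_pretrained(
    "google/t5-efficient-base-ff9000", torch_dtype=torch.bfloat16
)
print(model.dtype)  # torch.bfloat16
```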
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then the number of encoder layers and decoder layers both correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend that the reader go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported in the future.
|
{"language": ["en"], "license": "apache-2.0", "tags": ["deep-narrow"], "datasets": ["c4"], "inference": false}
|
google/t5-efficient-base-ff9000
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"deep-narrow",
"en",
"dataset:c4",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.10686"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us
|
T5-Efficient-BASE-FF9000 (Deep-Narrow version)
==============================================
T5-Efficient-BASE-FF9000 is a variation of Google's original T5 following the T5 model architecture.
It is a *pretrained-only* checkpoint and was released with the
paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures
of similar parameter count.
To quote the paper:
>
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
>
>
>
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
Model architecture details
--------------------------
This model checkpoint - t5-efficient-base-ff9000 - is of model type Base with the following variations:
* ff is 9000
It has 449.42 million parameters and thus requires *ca.* 1797.7 MB of memory in full precision (*fp32*)
or 898.85 MB of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
where the following abbreviations are used:
If a model checkpoint has no specific *el* or *dl*, then the number of encoder layers and decoder layers both correspond to *nl*.
Pre-Training
------------
The checkpoint was pretrained on the Colossal, Cleaned version of Common Crawl (C4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
Fine-Tuning
-----------
Note: This model is a pretrained checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model:
*PyTorch*:
* Summarization
* Question Answering
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
Downstream Performance
----------------------
TODO: Add table if available
Computational Complexity
------------------------
TODO: Add table if available
More information
----------------
We strongly recommend that the reader go carefully through the original paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers to get a more nuanced understanding of this model checkpoint.
As explained in the following issue, checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept here as they might be ported in the future.
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
# T5-Efficient-BASE-KV128 (Deep-Narrow version)
T5-Efficient-BASE-KV128 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Model architecture details
This model checkpoint - **t5-efficient-base-kv128** - is of model type **Base** with the following variations:
- **kv** is **128**
It has **307.87** million parameters and thus requires *ca.* **1231.47 MB** of memory in full precision (*fp32*)
or **615.73 MB** of memory in half precision (*fp16* or *bf16*).
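The *kv* variation can be verified in the checkpoint configuration; a small sketch, assuming the standard Transformers `T5Config` attribute names (`d_kv`, `d_model`, `d_ff`, `num_layers`, `num_heads`):

```
from transformers import T5Config

config = T5Config.from_pretrained("google/t5-efficient-base-kv128")
# d_kv is the per-head key/value projection size, i.e. the "kv" variation of this checkpoint.
print(config.d_kv, config.d_model, config.d_ff, config.num_layers, config.num_heads)
```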
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then the number of encoder layers and decoder layers both correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
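Independently of the example scripts above, a minimal PyTorch sketch of loading this checkpoint for seq2seq fine-tuning could look as follows (the task prefix and the single toy example are placeholders, not part of this card):

```
from transformers import T5TokenizerFast, T5ForConditionalGeneration

model_name = "google/t5-efficient-base-kv128"
tokenizer = T5TokenizerFast.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

# One toy training step: encode a source/target pair and back-propagate the loss.
inputs = tokenizer("summarize: The quick brown fox jumps over the lazy dog.", return_tensors="pt")
labels = tokenizer("A fox jumps over a dog.", return_tensors="pt").input_ids
loss = model(**inputs, labels=labels).loss
loss.backward()  # wrap this step with an optimizer or the Trainer API
```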
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend reading the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** carefully to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported in the future.
|
{"language": ["en"], "license": "apache-2.0", "tags": ["deep-narrow"], "datasets": ["c4"], "inference": false}
|
google/t5-efficient-base-kv128
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"deep-narrow",
"en",
"dataset:c4",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.10686"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us
|
T5-Efficient-BASE-KV128 (Deep-Narrow version)
=============================================
T5-Efficient-BASE-KV128 is a variation of Google's original T5 following the T5 model architecture.
It is a *pretrained-only* checkpoint and was released with the
paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures
of similar parameter count.
To quote the paper:
>
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
>
>
>
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
Model architecture details
--------------------------
This model checkpoint - t5-efficient-base-kv128 - is of model type Base with the following variations:
* kv is 128
It has 307.87 million parameters and thus requires *ca.* 1231.47 MB of memory in full precision (*fp32*)
or 615.73 MB of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
where the following abbreviations are used:
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
Pre-Training
------------
The checkpoint was pretrained on the Colossal, Cleaned version of Common Crawl (C4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
Fine-Tuning
-----------
Note: This model is a pretrained checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model:
*PyTorch*:
* Summarization
* Question Answering
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
Downstream Performance
----------------------
TODO: Add table if available
Computational Complexity
------------------------
TODO: Add table if available
More information
----------------
We strongly recommend reading the original paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers carefully to get a more nuanced understanding of this model checkpoint.
As explained in the following issue, checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept here as they might be ported in the future.
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
# T5-Efficient-BASE-KV16 (Deep-Narrow version)
T5-Efficient-BASE-KV16 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Model architecture details
This model checkpoint - **t5-efficient-base-kv16** - is of model type **Base** with the following variations:
- **kv** is **16**
It has **159.23** million parameters and thus requires *ca.* **636.92 MB** of memory in full precision (*fp32*)
or **318.46 MB** of memory in half precision (*fp16* or *bf16*).
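To verify the *kv* variation programmatically, you can inspect the checkpoint's configuration (a minimal sketch; the attribute names are those exposed by the Transformers `T5Config`):

```
from transformers import T5Config

config = T5Config.from_pretrained("google/t5-efficient-base-kv16")
print(config.d_kv)        # key/value projection dimension (kv) -> 16
print(config.d_model)     # embedding dimension (dm)
print(config.num_heads)   # number of attention heads (nh)
print(config.num_layers)  # encoder depth (nl / el)
```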
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend reading the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** carefully to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported in the future.
|
{"language": ["en"], "license": "apache-2.0", "tags": ["deep-narrow"], "datasets": ["c4"], "inference": false}
|
google/t5-efficient-base-kv16
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"deep-narrow",
"en",
"dataset:c4",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.10686"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us
|
T5-Efficient-BASE-KV16 (Deep-Narrow version)
============================================
T5-Efficient-BASE-KV16 is a variation of Google's original T5 following the T5 model architecture.
It is a *pretrained-only* checkpoint and was released with the
paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures
of similar parameter count.
To quote the paper:
>
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
>
>
>
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
Model architecture details
--------------------------
This model checkpoint - t5-efficient-base-kv16 - is of model type Base with the following variations:
* kv is 16
It has 159.23 million parameters and thus requires *ca.* 636.92 MB of memory in full precision (*fp32*)
or 318.46 MB of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
where the following abbreviations are used:
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
Pre-Training
------------
The checkpoint was pretrained on the Colossal, Cleaned version of Common Crawl (C4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
Fine-Tuning
-----------
Note: This model is a pretrained checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model:
*PyTorch*:
* Summarization
* Question Answering
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
Downstream Performance
----------------------
TODO: Add table if available
Computational Complexity
------------------------
TODO: Add table if available
More information
----------------
We strongly recommend reading the original paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers carefully to get a more nuanced understanding of this model checkpoint.
As explained in the following issue, checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept here as they might be ported in the future.
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
# T5-Efficient-BASE-KV256 (Deep-Narrow version)
T5-Efficient-BASE-KV256 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Model architecture details
This model checkpoint - **t5-efficient-base-kv256** - is of model type **Base** with the following variations:
- **kv** is **256**
It has **477.74** million parameters and thus requires *ca.* **1910.94 MB** of memory in full precision (*fp32*)
or **955.47 MB** of memory in half precision (*fp16* or *bf16*).
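If you want to confirm the parameter count yourself, a short sketch is given below (note that this loads the full *fp32* weights into memory):

```
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("google/t5-efficient-base-kv256")
print(f"{model.num_parameters() / 1e6:.2f}M parameters")  # roughly 477.74M
```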
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend reading the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** carefully to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported in the future.
|
{"language": ["en"], "license": "apache-2.0", "tags": ["deep-narrow"], "datasets": ["c4"], "inference": false}
|
google/t5-efficient-base-kv256
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"deep-narrow",
"en",
"dataset:c4",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.10686"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us
|
T5-Efficient-BASE-KV256 (Deep-Narrow version)
=============================================
T5-Efficient-BASE-KV256 is a variation of Google's original T5 following the T5 model architecture.
It is a *pretrained-only* checkpoint and was released with the
paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures
of similar parameter count.
To quote the paper:
>
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
>
>
>
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
Model architecture details
--------------------------
This model checkpoint - t5-efficient-base-kv256 - is of model type Base with the following variations:
* kv is 256
It has 477.74 million parameters and thus requires *ca.* 1910.94 MB of memory in full precision (*fp32*)
or 955.47 MB of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
where the following abbreviations are used:
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
Pre-Training
------------
The checkpoint was pretrained on the Colossal, Cleaned version of Common Crawl (C4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
Fine-Tuning
-----------
Note: This model is a pretrained checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model:
*PyTorch*:
* Summarization
* Question Answering
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
Downstream Performance
----------------------
TODO: Add table if available
Computational Complexity
------------------------
TODO: Add table if available
More information
----------------
We strongly recommend reading the original paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers carefully to get a more nuanced understanding of this model checkpoint.
As explained in the following issue, checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept here as they might be ported in the future.
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
# T5-Efficient-BASE-KV32 (Deep-Narrow version)
T5-Efficient-BASE-KV32 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Model architecture details
This model checkpoint - **t5-efficient-base-kv32** - is of model type **Base** with the following variations:
- **kv** is **32**
It has **180.46** million parameters and thus requires *ca.* **721.86 MB** of memory in full precision (*fp32*)
or **360.93 MB** of memory in half precision (*fp16* or *bf16*).
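The half-precision footprint can be obtained in practice by loading the weights in *fp16*, e.g. via the `torch_dtype` argument of `from_pretrained` (a minimal sketch; whether fp16 is adequate for your task should be checked separately):

```
import torch
from transformers import T5ForConditionalGeneration

# Load the checkpoint directly in half precision, roughly halving the memory footprint.
model = T5ForConditionalGeneration.from_pretrained(
    "google/t5-efficient-base-kv32", torch_dtype=torch.float16
)
print(model.dtype)  # torch.float16
```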
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend reading the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** carefully to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported in the future.
|
{"language": ["en"], "license": "apache-2.0", "tags": ["deep-narrow"], "datasets": ["c4"], "inference": false}
|
google/t5-efficient-base-kv32
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"deep-narrow",
"en",
"dataset:c4",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.10686"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us
|
T5-Efficient-BASE-KV32 (Deep-Narrow version)
============================================
T5-Efficient-BASE-KV32 is a variation of Google's original T5 following the T5 model architecture.
It is a *pretrained-only* checkpoint and was released with the
paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures
of similar parameter count.
To quote the paper:
>
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
>
>
>
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
Model architecture details
--------------------------
This model checkpoint - t5-efficient-base-kv32 - is of model type Base with the following variations:
* kv is 32
It has 180.46 million parameters and thus requires *ca.* 721.86 MB of memory in full precision (*fp32*)
or 360.93 MB of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
where the following abbreviations are used:
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
Pre-Training
------------
The checkpoint was pretrained on the Colossal, Cleaned version of Common Crawl (C4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
Fine-Tuning
-----------
Note: This model is a pretrained checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model:
*PyTorch*:
* Summarization
* Question Answering
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
Downstream Performance
----------------------
TODO: Add table if available
Computational Complexity
------------------------
TODO: Add table if available
More information
----------------
We strongly recommend reading the original paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers carefully to get a more nuanced understanding of this model checkpoint.
As explained in the following issue, checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept here as they might be ported in the future.
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
# T5-Efficient-BASE-NH16 (Deep-Narrow version)
T5-Efficient-BASE-NH16 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Model architecture details
This model checkpoint - **t5-efficient-base-nh16** - is of model type **Base** with the following variations:
- **nh** is **16**
It has **251.24** million parameters and thus requires *ca.* **1004.97 MB** of memory in full precision (*fp32*)
or **502.49 MB** of memory in half precision (*fp16* or *bf16*).
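Since only the number of attention heads differs from the Base configuration, this can be checked directly on the loaded config (a minimal sketch; attribute names as in the Transformers `T5Config`):

```
from transformers import T5Config

config = T5Config.from_pretrained("google/t5-efficient-base-nh16")
print(config.num_heads)                          # nh -> 16
print(config.d_model, config.d_kv, config.d_ff)  # unchanged Base values
```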
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend reading the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** carefully to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported in the future.
|
{"language": ["en"], "license": "apache-2.0", "tags": ["deep-narrow"], "datasets": ["c4"], "inference": false}
|
google/t5-efficient-base-nh16
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"deep-narrow",
"en",
"dataset:c4",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.10686"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us
|
T5-Efficient-BASE-NH16 (Deep-Narrow version)
============================================
T5-Efficient-BASE-NH16 is a variation of Google's original T5 following the T5 model architecture.
It is a *pretrained-only* checkpoint and was released with the
paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures
of similar parameter count.
To quote the paper:
>
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
>
>
>
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
Model architecture details
--------------------------
This model checkpoint - t5-efficient-base-nh16 - is of model type Base with the following variations:
* nh is 16
It has 251.24 million parameters and thus requires *ca.* 1004.97 MB of memory in full precision (*fp32*)
or 502.49 MB of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
where the following abbreviations are used:
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
Pre-Training
------------
The checkpoint was pretrained on the Colossal, Cleaned version of Common Crawl (C4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
Fine-Tuning
-----------
Note: This model is a pretrained checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model:
*PyTorch*:
* Summarization
* Question Answering
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
Downstream Performance
----------------------
TODO: Add table if available
Computational Complexity
------------------------
TODO: Add table if available
More information
----------------
We strongly recommend reading the original paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers carefully to get a more nuanced understanding of this model checkpoint.
As explained in the following issue, checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept here as they might be ported in the future.
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
# T5-Efficient-BASE-NH24 (Deep-Narrow version)
T5-Efficient-BASE-NH24 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Model architecture details
This model checkpoint - **t5-efficient-base-nh24** - is of model type **Base** with the following variations:
- **nh** is **24**
It has **307.87** million parameters and thus requires *ca.* **1231.47 MB** of memory in full precision (*fp32*)
or **615.73 MB** of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model (see also the minimal TensorFlow sketch after these lists):
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
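For orientation, here is a minimal PyTorch sketch of a single fine-tuning step on a toy text-to-text pair. It is *not* the official training script linked above; the example text, learning rate and optimizer are illustrative assumptions, and it assumes the checkpoint ships the standard T5 tokenizer files:

```
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/t5-efficient-base-nh24")
model = T5ForConditionalGeneration.from_pretrained("google/t5-efficient-base-nh24")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# T5 is trained text-to-text, so inputs and targets are both plain strings.
inputs = tokenizer("summarize: The quick brown fox jumps over the lazy dog.", return_tensors="pt")
labels = tokenizer("A fox jumps over a dog.", return_tensors="pt").input_ids

# One forward/backward pass; the model returns the cross-entropy loss when labels are passed.
loss = model(**inputs, labels=labels).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
```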
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend reading the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** carefully to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues), as they might be ported in the future.
|
{"language": ["en"], "license": "apache-2.0", "tags": ["deep-narrow"], "datasets": ["c4"], "inference": false}
|
google/t5-efficient-base-nh24
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"deep-narrow",
"en",
"dataset:c4",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.10686"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us
|
T5-Efficient-BASE-NH24 (Deep-Narrow version)
============================================
T5-Efficient-BASE-NH24 is a variation of Google's original T5 following the T5 model architecture.
It is a *pretrained-only* checkpoint and was released with the
paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures
of similar parameter count.
To quote the paper:
>
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
>
>
>
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
Model architecture details
--------------------------
This model checkpoint - t5-efficient-base-nh24 - is of model type Base with the following variations:
* nh is 24
It has 307.87 million parameters and thus requires *ca.* 1231.47 MB of memory in full precision (*fp32*)
or 615.73 MB of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
where the following abbreviations are used:
If a model checkpoint does not specify *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
Pre-Training
------------
The checkpoint was pretrained on the Colossal, Cleaned version of Common Crawl (C4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
Fine-Tuning
-----------
Note: This model is a pretrained checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model:
*PyTorch*:
* Summarization
* Question Answering
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
Downstream Performance
----------------------
TODO: Add table if available
Computational Complexity
------------------------
TODO: Add table if available
More information
----------------
We strongly recommend reading the original paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers carefully to get a more nuanced understanding of this model checkpoint.
As explained in the following issue, checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept here, as they might be ported in the future.
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
# T5-Efficient-BASE-NH32 (Deep-Narrow version)
T5-Efficient-BASE-NH32 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Model architecture details
This model checkpoint - **t5-efficient-base-nh32** - is of model type **Base** with the following variations:
- **nh** is **32**
It has **364.49** million parameters and thus requires *ca.* **1457.96 MB** of memory in full precision (*fp32*)
or **728.98 MB** of memory in half precision (*fp16* or *bf16*).
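To confirm which dimension this variant overrides, the configuration can be inspected without downloading the full weights; the sketch below assumes the standard `T5Config` attribute names used by Transformers:

```
from transformers import AutoConfig

config = AutoConfig.from_pretrained("google/t5-efficient-base-nh32")

# Only the number of attention heads (nh) deviates from the Base architecture.
print("nl (num_layers):", config.num_layers)   # expected: 12
print("dm (d_model):   ", config.d_model)      # expected: 768
print("ff (d_ff):      ", config.d_ff)         # expected: 3072
print("kv (d_kv):      ", config.d_kv)         # expected: 64
print("nh (num_heads): ", config.num_heads)    # expected: 32
```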
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint does not specify *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend reading the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** carefully to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues), as they might be ported in the future.
|
{"language": ["en"], "license": "apache-2.0", "tags": ["deep-narrow"], "datasets": ["c4"], "inference": false}
|
google/t5-efficient-base-nh32
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"deep-narrow",
"en",
"dataset:c4",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.10686"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us
|
T5-Efficient-BASE-NH32 (Deep-Narrow version)
============================================
T5-Efficient-BASE-NH32 is a variation of Google's original T5 following the T5 model architecture.
It is a *pretrained-only* checkpoint and was released with the
paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures
of similar parameter count.
To quote the paper:
>
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
>
>
>
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
Model architecture details
--------------------------
This model checkpoint - t5-efficient-base-nh32 - is of model type Base with the following variations:
* nh is 32
It has 364.49 million parameters and thus requires *ca.* 1457.96 MB of memory in full precision (*fp32*)
or 728.98 MB of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
where the following abbreviations are used:
If a model checkpoint does not specify *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
Pre-Training
------------
The checkpoint was pretrained on the Colossal, Cleaned version of Common Crawl (C4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
Fine-Tuning
-----------
Note: This model is a pretrained checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model:
*PyTorch*:
* Summarization
* Question Answering
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
Downstream Performance
----------------------
TODO: Add table if available
Computational Complexity
------------------------
TODO: Add table if available
More information
----------------
We strongly recommend reading the original paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers carefully to get a more nuanced understanding of this model checkpoint.
As explained in the following issue, checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept here, as they might be ported in the future.
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
# T5-Efficient-BASE-NH8 (Deep-Narrow version)
T5-Efficient-BASE-NH8 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Model architecture details
This model checkpoint - **t5-efficient-base-nh8** - is of model type **Base** with the following variations:
- **nh** is **8**
It has **194.62** million parameters and thus requires *ca.* **778.48 MB** of memory in full precision (*fp32*)
or **389.24 MB** of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint does not specify *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
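To make the span-corruption objective concrete, the sketch below hand-constructs one corrupted input and its target using T5's sentinel tokens; actual pre-training samples spans randomly from C4 rather than using a fixed example like this:

```
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/t5-efficient-base-nh8")
model = T5ForConditionalGeneration.from_pretrained("google/t5-efficient-base-nh8")

# Masked spans in the input are replaced by sentinel tokens (<extra_id_0>, <extra_id_1>, ...);
# the target spells out each masked span after its corresponding sentinel.
input_ids = tokenizer("The <extra_id_0> walks in <extra_id_1> park", return_tensors="pt").input_ids
labels = tokenizer("<extra_id_0> cute dog <extra_id_1> the <extra_id_2>", return_tensors="pt").input_ids

# The model returns the denoising (cross-entropy) loss for this toy example.
loss = model(input_ids=input_ids, labels=labels).loss
print(float(loss))
```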
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend reading the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** carefully to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues), as they might be ported in the future.
|
{"language": ["en"], "license": "apache-2.0", "tags": ["deep-narrow"], "datasets": ["c4"], "inference": false}
|
google/t5-efficient-base-nh8
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"deep-narrow",
"en",
"dataset:c4",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.10686"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us
|
T5-Efficient-BASE-NH8 (Deep-Narrow version)
===========================================
T5-Efficient-BASE-NH8 is a variation of Google's original T5 following the T5 model architecture.
It is a *pretrained-only* checkpoint and was released with the
paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures
of similar parameter count.
To quote the paper:
>
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
>
>
>
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
Model architecture details
--------------------------
This model checkpoint - t5-efficient-base-nh8 - is of model type Base with the following variations:
* nh is 8
It has 194.62 million parameters and thus requires *ca.* 778.48 MB of memory in full precision (*fp32*)
or 389.24 MB of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
where the following abbreviations are used:
If a model checkpoint does not specify *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
Pre-Training
------------
The checkpoint was pretrained on the Colossal, Cleaned version of Common Crawl (C4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
Fine-Tuning
-----------
Note: This model is a pretrained checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model:
*PyTorch*:
* Summarization
* Question Answering
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
Downstream Performance
----------------------
TODO: Add table if available
Computational Complexity
------------------------
TODO: Add table if available
More information
----------------
We strongly recommend reading the original paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers carefully to get a more nuanced understanding of this model checkpoint.
As explained in the following issue, checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept here, as they might be ported in the future.
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
# T5-Efficient-BASE-NL16 (Deep-Narrow version)
T5-Efficient-BASE-NL16 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Model architecture details
This model checkpoint - **t5-efficient-base-nl16** - is of model type **Base** with the following variations:
- **nl** is **16**
It has **289.02** million parameters and thus requires *ca.* **1156.07 MB** of memory in full precision (*fp32*)
or **578.03 MB** of memory in half precision (*fp16* or *bf16*).
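The half-precision figure can be exercised directly by loading the weights in `bfloat16` (or `float16`); a minimal sketch, assuming a reasonably recent `torch`/`transformers` installation:

```
import torch
from transformers import T5ForConditionalGeneration

# Loading in half precision roughly halves the weight memory footprint.
model = T5ForConditionalGeneration.from_pretrained(
    "google/t5-efficient-base-nl16",
    torch_dtype=torch.bfloat16,
)

num_params = sum(p.numel() for p in model.parameters())
print(f"Parameters: {num_params / 1e6:.2f}M")
print(f"Approx. bf16 memory: {num_params * 2 / 1e6:.2f} MB")
```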
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint does not specify *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend reading the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** carefully to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues), as they might be ported in the future.
|
{"language": ["en"], "license": "apache-2.0", "tags": ["deep-narrow"], "datasets": ["c4"], "inference": false}
|
google/t5-efficient-base-nl16
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"deep-narrow",
"en",
"dataset:c4",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.10686"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us
|
T5-Efficient-BASE-NL16 (Deep-Narrow version)
============================================
T5-Efficient-BASE-NL16 is a variation of Google's original T5 following the T5 model architecture.
It is a *pretrained-only* checkpoint and was released with the
paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures
of similar parameter count.
To quote the paper:
>
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
>
>
>
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
Model architecture details
--------------------------
This model checkpoint - t5-efficient-base-nl16 - is of model type Base with the following variations:
* nl is 16
It has 289.02 million parameters and thus requires *ca.* 1156.07 MB of memory in full precision (*fp32*)
or 578.03 MB of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
where the following abbreviations are used:
If a model checkpoint does not specify *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
Pre-Training
------------
The checkpoint was pretrained on the Colossal, Cleaned version of Common Crawl (C4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
Fine-Tuning
-----------
Note: This model is a pretrained checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model:
*PyTorch*:
* Summarization
* Question Answering
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
Downstream Performance
----------------------
TODO: Add table if available
Computational Complexity
------------------------
TODO: Add table if available
More information
----------------
We strongly recommend reading the original paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers carefully to get a more nuanced understanding of this model checkpoint.
As explained in the following issue, checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept here, as they might be ported in the future.
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
# T5-Efficient-BASE-NL2 (Deep-Narrow version)
T5-Efficient-BASE-NL2 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Model architecture details
This model checkpoint - **t5-efficient-base-nl2** - is of model type **Base** with the following variations:
- **nl** is **2**
It has **57.72** million parameters and thus requires *ca.* **230.88 MB** of memory in full precision (*fp32*)
or **115.44 MB** of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint does not specify *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model (a sketch of the encoder-decoder adaptation mentioned in the text-classification notes follows the examples below):
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
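The adaptation mentioned in the text-classification notes above essentially means casting class labels as target strings. The following is a hedged sketch of that idea for a single example; the prompt format and label strings are illustrative assumptions, not the official recipe:

```
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/t5-efficient-base-nl2")
model = T5ForConditionalGeneration.from_pretrained("google/t5-efficient-base-nl2")

# Encoder-decoder text classification: the class label is simply another target string.
text = "sst2 sentence: the movie was a delight from start to finish"
label = "positive"

inputs = tokenizer(text, return_tensors="pt")
labels = tokenizer(label, return_tensors="pt").input_ids

# During fine-tuning this loss is minimized; at inference time model.generate()
# would produce the label string instead.
loss = model(**inputs, labels=labels).loss
print(float(loss))
```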
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend reading the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** carefully to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues), as they might be ported in the future.
|
{"language": ["en"], "license": "apache-2.0", "tags": ["deep-narrow"], "datasets": ["c4"], "inference": false}
|
google/t5-efficient-base-nl2
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"deep-narrow",
"en",
"dataset:c4",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.10686"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us
|
T5-Efficient-BASE-NL2 (Deep-Narrow version)
===========================================
T5-Efficient-BASE-NL2 is a variation of Google's original T5 following the T5 model architecture.
It is a *pretrained-only* checkpoint and was released with the
paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures
of similar parameter count.
To quote the paper:
>
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
>
>
>
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
Model architecture details
--------------------------
This model checkpoint - t5-efficient-base-nl2 - is of model type Base with the following variations:
* nl is 2
It has 57.72 million parameters and thus requires *ca.* 230.88 MB of memory in full precision (*fp32*)
or 115.44 MB of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
where the following abbreviations are used:
If a model checkpoint does not specify *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
Pre-Training
------------
The checkpoint was pretrained on the Colossal, Cleaned version of Common Crawl (C4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
Fine-Tuning
-----------
Note: This model is a pretrained checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model:
*PyTorch*:
* Summarization
* Question Answering
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
Downstream Performance
----------------------
TODO: Add table if available
Computational Complexity
------------------------
TODO: Add table if available
More information
----------------
We strongly recommend reading the original paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers carefully to get a more nuanced understanding of this model checkpoint.
As explained in the following issue, checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept here, as they might be ported in the future.
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
# T5-Efficient-BASE-NL24 (Deep-Narrow version)
T5-Efficient-BASE-NL24 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Model architecture details
This model checkpoint - **t5-efficient-base-nl24** - is of model type **Base** with the following variations:
- **nl** is **24**
It has **421.19** million parameters and thus requires *ca.* **1684.75 MB** of memory in full precision (*fp32*)
or **842.37 MB** of memory in half precision (*fp16* or *bf16*).
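As a sanity check, these memory figures follow directly from the parameter count (4 bytes per parameter in fp32, 2 bytes in fp16/bf16); the values above appear to use decimal megabytes (10^6 bytes):

```
# Back-of-the-envelope check of the reported memory figures (weights only).
num_params = 421.19e6
print(f"fp32: {num_params * 4 / 1e6:.2f} MB")       # ~1684.76 MB, close to the 1684.75 MB above
print(f"fp16/bf16: {num_params * 2 / 1e6:.2f} MB")  # ~842.38 MB, close to the 842.37 MB above
```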
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint does not specify *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
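As a concrete starting point, here is a minimal sketch (not part of the original card) that loads this checkpoint with the generic Transformers seq2seq classes and runs a single training step; the task prefix, example strings and training setup are placeholders to be replaced with your own fine-tuning data:

```
from transformers import AutoTokenizer, T5ForConditionalGeneration

# Load the pretrained-only checkpoint (assumes torch, transformers and
# sentencepiece are installed).
model_id = "google/t5-efficient-base-nl24"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)
model.train()

# Toy seq2seq example; replace with task-specific inputs and targets.
inputs = tokenizer("summarize: The quick brown fox jumps over the lazy dog.",
                   return_tensors="pt")
labels = tokenizer("A fox jumps over a dog.", return_tensors="pt").input_ids

loss = model(**inputs, labels=labels).loss  # standard seq2seq LM loss
loss.backward()                             # plug into your own optimizer or Trainer
```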
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend that the reader go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported in the future.
|
{"language": ["en"], "license": "apache-2.0", "tags": ["deep-narrow"], "datasets": ["c4"], "inference": false}
|
google/t5-efficient-base-nl24
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"deep-narrow",
"en",
"dataset:c4",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.10686"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us
|
T5-Efficient-BASE-NL24 (Deep-Narrow version)
============================================
T5-Efficient-BASE-NL24 is a variation of Google's original T5 following the T5 model architecture.
It is a *pretrained-only* checkpoint and was released with the
paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures
of similar parameter count.
To quote the paper:
>
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
>
>
>
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
Details of the model architecture
--------------------------
This model checkpoint - t5-efficient-base-nl24 - is of model type Base with the following variations:
* nl is 24
It has 421.19 million parameters and thus requires *ca.* 1684.75 MB of memory in full precision (*fp32*)
or 842.37 MB of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
where the following abbreviations are used:
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
Pre-Training
------------
The checkpoint was pretrained on the Colossal, Cleaned version of Common Crawl (C4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
Fine-Tuning
-----------
Note: This model is a pretrained checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
* Summarization
* Question Answering
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
Downstream Performance
----------------------
TODO: Add table if available
Computational Complexity
------------------------
TODO: Add table if available
More information
----------------
We strongly recommend that the reader go carefully through the original paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers to get a more nuanced understanding of this model checkpoint.
As explained in the following issue, checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept here as they might be ported in the future.
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
# T5-Efficient-BASE-NL32 (Deep-Narrow version)
T5-Efficient-BASE-NL32 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details of the model architecture
This model checkpoint - **t5-efficient-base-nl32** - is of model type **Base** with the following variations:
- **nl** is **32**
It has **553.36** million parameters and thus requires *ca.* **2213.43 MB** of memory in full precision (*fp32*)
or **1106.71 MB** of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-value projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model; a short usage sketch follows the framework-specific lists below:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
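Before adapting one of the examples above, a quick smoke test that the checkpoint loads and runs can be useful; the sketch below is illustrative only and not part of the original card. Since this is a pretrained-only checkpoint, the generated text will not be meaningful until the model has been fine-tuned:

```
from transformers import AutoTokenizer, T5ForConditionalGeneration

# Smoke test only: load the checkpoint and generate a few tokens.
model_id = "google/t5-efficient-base-nl32"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

input_ids = tokenizer("translate English to German: How old are you?",
                      return_tensors="pt").input_ids
output_ids = model.generate(input_ids, max_length=32)
# Output is not expected to be a useful translation before fine-tuning.
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```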
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend that the reader go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported in the future.
|
{"language": ["en"], "license": "apache-2.0", "tags": ["deep-narrow"], "datasets": ["c4"], "inference": false}
|
google/t5-efficient-base-nl32
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"deep-narrow",
"en",
"dataset:c4",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.10686"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us
|
T5-Efficient-BASE-NL32 (Deep-Narrow version)
============================================
T5-Efficient-BASE-NL32 is a variation of Google's original T5 following the T5 model architecture.
It is a *pretrained-only* checkpoint and was released with the
paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures
of similar parameter count.
To quote the paper:
>
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
>
>
>
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
Details of the model architecture
--------------------------
This model checkpoint - t5-efficient-base-nl32 - is of model type Base with the following variations:
* nl is 32
It has 553.36 million parameters and thus requires *ca.* 2213.43 MB of memory in full precision (*fp32*)
or 1106.71 MB of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
where the following abbreviations are used:
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
Pre-Training
------------
The checkpoint was pretrained on the Colossal, Cleaned version of Common Crawl (C4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
Fine-Tuning
-----------
Note: This model is a pretrained checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
* Summarization
* Question Answering
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
Downstream Performance
----------------------
TODO: Add table if available
Computational Complexity
------------------------
TODO: Add table if available
More information
----------------
We strongly recommend that the reader go carefully through the original paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers to get a more nuanced understanding of this model checkpoint.
As explained in the following issue, checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept here as they might be ported in the future.
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
# T5-Efficient-BASE-NL36 (Deep-Narrow version)
T5-Efficient-BASE-NL36 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details of the model architecture
This model checkpoint - **t5-efficient-base-nl36** - is of model type **Base** with the following variations:
- **nl** is **36**
It has **619.44** million parameters and thus requires *ca.* **2477.77 MB** of memory in full precision (*fp32*)
or **1238.88 MB** of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-value projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model; a short usage sketch follows the framework-specific lists below:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
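If you want to verify that the checkpoint matches the depth and size quoted above before starting a fine-tuning run, a short check such as the following can be used (illustrative only, not part of the original card); the printed parameter count should be close to the rounded figure on this card:

```
from transformers import T5ForConditionalGeneration

# Load the checkpoint and inspect depth and parameter count.
model = T5ForConditionalGeneration.from_pretrained("google/t5-efficient-base-nl36")
print(model.config.num_layers, model.config.num_decoder_layers)                 # expected: 36 36
print(round(sum(p.numel() for p in model.parameters()) / 1e6, 2), "M params")   # roughly 619 M
```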
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend that the reader go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported in the future.
|
{"language": ["en"], "license": "apache-2.0", "tags": ["deep-narrow"], "datasets": ["c4"], "inference": false}
|
google/t5-efficient-base-nl36
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"deep-narrow",
"en",
"dataset:c4",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.10686"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us
|
T5-Efficient-BASE-NL36 (Deep-Narrow version)
============================================
T5-Efficient-BASE-NL36 is a variation of Google's original T5 following the T5 model architecture.
It is a *pretrained-only* checkpoint and was released with the
paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures
of similar parameter count.
To quote the paper:
>
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
>
>
>
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
Details of the model architecture
--------------------------
This model checkpoint - t5-efficient-base-nl36 - is of model type Base with the following variations:
* nl is 36
It has 619.44 million parameters and thus requires *ca.* 2477.77 MB of memory in full precision (*fp32*)
or 1238.88 MB of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
where the following abbreviations are used:
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
Pre-Training
------------
The checkpoint was pretrained on the Colossal, Cleaned version of Common Crawl (C4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
Fine-Tuning
-----------
Note: This model is a pretrained checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
* Summarization
* Question Answering
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
Downstream Performance
----------------------
TODO: Add table if available
Computational Complexity
------------------------
TODO: Add table if available
More information
----------------
We strongly recommend that the reader go carefully through the original paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers to get a more nuanced understanding of this model checkpoint.
As explained in the following issue, checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept here as they might be ported in the future.
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
# T5-Efficient-BASE-NL4 (Deep-Narrow version)
T5-Efficient-BASE-NL4 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details of the model architecture
This model checkpoint - **t5-efficient-base-nl4** - is of model type **Base** with the following variations:
- **nl** is **4**
It has **90.76** million parameters and thus requires *ca.* **363.05 MB** of memory in full precision (*fp32*)
or **181.52 MB** of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-value projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model; a short usage sketch follows the framework-specific lists below:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
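The sketch below (illustrative only, not part of the original card) shows the bare minimum of a fine-tuning loop body for this checkpoint: one forward pass with labels followed by one optimizer step. The learning rate, task prefix and toy strings are placeholders, not recommendations:

```
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_id = "google/t5-efficient-base-nl4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)
model.train()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# One toy training step; in practice, iterate over a DataLoader of your task data.
inputs = tokenizer("summarize: T5 casts every NLP problem as text-to-text.",
                   return_tensors="pt")
labels = tokenizer("T5 is a text-to-text model.", return_tensors="pt").input_ids

loss = model(**inputs, labels=labels).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
```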
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend that the reader go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported in the future.
|
{"language": ["en"], "license": "apache-2.0", "tags": ["deep-narrow"], "datasets": ["c4"], "inference": false}
|
google/t5-efficient-base-nl4
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"deep-narrow",
"en",
"dataset:c4",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.10686"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us
|
T5-Efficient-BASE-NL4 (Deep-Narrow version)
===========================================
T5-Efficient-BASE-NL4 is a variation of Google's original T5 following the T5 model architecture.
It is a *pretrained-only* checkpoint and was released with the
paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures
of similar parameter count.
To quote the paper:
>
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
>
>
>
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
Details of the model architecture
--------------------------
This model checkpoint - t5-efficient-base-nl4 - is of model type Base with the following variations:
* nl is 4
It has 90.76 million parameters and thus requires *ca.* 363.05 MB of memory in full precision (*fp32*)
or 181.52 MB of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
where the following abbreviations are used:
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
Pre-Training
------------
The checkpoint was pretrained on the Colossal, Cleaned version of Common Crawl (C4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
Fine-Tuning
-----------
Note: This model is a pretrained checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
* Summarization
* Question Answering
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
Downstream Performance
----------------------
TODO: Add table if available
Computational Complexity
------------------------
TODO: Add table if available
More information
----------------
We strongly recommend that the reader go carefully through the original paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers to get a more nuanced understanding of this model checkpoint.
As explained in the following issue, checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept here as they might be ported in the future.
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
# T5-Efficient-BASE-NL40 (Deep-Narrow version)
T5-Efficient-BASE-NL40 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details of the model architecture
This model checkpoint - **t5-efficient-base-nl40** - is of model type **Base** with the following variations:
- **nl** is **40**
It has **685.53** million parameters and thus requires *ca.* **2742.11 MB** of memory in full precision (*fp32*)
or **1371.05 MB** of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-value projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model; a short usage sketch follows the framework-specific lists below:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
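The sketch below is illustrative only and not part of the original card: it loads the checkpoint in half precision (roughly halving the memory footprint quoted above) and runs a short generation smoke test, assuming a CUDA GPU is available. Since the checkpoint is pretrained-only, the output is not expected to be a useful summary before fine-tuning:

```
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

# Half-precision loading plus a generation smoke test (requires a CUDA GPU).
model_id = "google/t5-efficient-base-nl40"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

input_ids = tokenizer("summarize: studies have shown that owning a dog is good for you.",
                      return_tensors="pt").input_ids.to("cuda")
output_ids = model.generate(input_ids, max_length=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```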
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend that the reader go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported in the future.
|
{"language": ["en"], "license": "apache-2.0", "tags": ["deep-narrow"], "datasets": ["c4"], "inference": false}
|
google/t5-efficient-base-nl40
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"deep-narrow",
"en",
"dataset:c4",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.10686"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us
|
T5-Efficient-BASE-NL40 (Deep-Narrow version)
============================================
T5-Efficient-BASE-NL40 is a variation of Google's original T5 following the T5 model architecture.
It is a *pretrained-only* checkpoint and was released with the
paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures
of similar parameter count.
To quote the paper:
>
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
>
>
>
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
Details of the model architecture
--------------------------
This model checkpoint - t5-efficient-base-nl40 - is of model type Base with the following variations:
* nl is 40
It has 685.53 million parameters and thus requires *ca.* 2742.11 MB of memory in full precision (*fp32*)
or 1371.05 MB of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
where the following abbreviations are used:
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
Pre-Training
------------
The checkpoint was pretrained on the Colossal, Cleaned version of Common Crawl (C4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
Fine-Tuning
-----------
Note: This model is a pretrained checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
* Summarization
* Question Answering
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
Downstream Performance
----------------------
TODO: Add table if available
Computational Complexity
------------------------
TODO: Add table if available
More information
----------------
We strongly recommend that the reader go carefully through the original paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers to get a more nuanced understanding of this model checkpoint.
As explained in the following issue, checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept here as they might be ported in the future.
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
# T5-Efficient-BASE-NL48 (Deep-Narrow version)
T5-Efficient-BASE-NL48 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details of the model architecture
This model checkpoint - **t5-efficient-base-nl48** - is of model type **Base** with the following variations:
- **nl** is **48**
It has **817.7** million parameters and thus requires *ca.* **3270.79 MB** of memory in full precision (*fp32*)
or **1635.39 MB** of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-value projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model; a short usage sketch follows the framework-specific lists below:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
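As a minimal sketch (not part of the original card), the snippet below runs a single training step on a toy example; because this variant stacks 48 encoder and 48 decoder blocks, it also enables gradient checkpointing to trade extra compute for activation memory. The example strings are placeholders:

```
from transformers import AutoTokenizer, T5ForConditionalGeneration

# Deep model: enable gradient checkpointing to reduce activation memory.
model_id = "google/t5-efficient-base-nl48"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)
model.gradient_checkpointing_enable()
model.train()

inputs = tokenizer("summarize: Deep narrow transformers trade width for depth.",
                   return_tensors="pt")
labels = tokenizer("Depth over width.", return_tensors="pt").input_ids

loss = model(**inputs, labels=labels).loss
loss.backward()  # plug into your own optimizer or the seq2seq examples listed above
```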
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend that the reader go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported in the future.
|
{"language": ["en"], "license": "apache-2.0", "tags": ["deep-narrow"], "datasets": ["c4"], "inference": false}
|
google/t5-efficient-base-nl48
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"deep-narrow",
"en",
"dataset:c4",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.10686"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us
|
T5-Efficient-BASE-NL48 (Deep-Narrow version)
============================================
T5-Efficient-BASE-NL48 is a variation of Google's original T5 following the T5 model architecture.
It is a *pretrained-only* checkpoint and was released with the
paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures
of similar parameter count.
To quote the paper:
>
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
>
>
>
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
Details of the model architecture
--------------------------
This model checkpoint - t5-efficient-base-nl48 - is of model type Base with the following variations:
* nl is 48
It has 817.7 million parameters and thus requires *ca.* 3270.79 MB of memory in full precision (*fp32*)
or 1635.39 MB of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
where the following abbreviations are used:
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
Pre-Training
------------
The checkpoint was pretrained on the Colossal, Cleaned version of Common Crawl (C4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
Fine-Tuning
-----------
Note: This model is a pretrained checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
* Summarization
* Question Answering
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
Downstream Performance
----------------------
TODO: Add table if available
Computational Complexity
------------------------
TODO: Add table if available
More information
----------------
We strongly recommend that the reader go carefully through the original paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers to get a more nuanced understanding of this model checkpoint.
As explained in the following issue, checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept here as they might be ported in the future.
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
# T5-Efficient-BASE-NL8 (Deep-Narrow version)
T5-Efficient-BASE-NL8 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details of the model architecture
This model checkpoint - **t5-efficient-base-nl8** - is of model type **Base** with the following variations:
- **nl** is **8**
It has **156.85** million parameters and thus requires *ca.* **627.39 MB** of memory in full precision (*fp32*)
or **313.69 MB** of memory in half precision (*fp16* or *bf16*).
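As a rough back-of-the-envelope check (not part of the original card), these figures correspond to 4 bytes per parameter in full precision and 2 bytes in half precision, with 1 MB counted as 10^6 bytes and activations/optimizer state ignored:

```python
# Weights-only memory estimate for t5-efficient-base-nl8 (156.85M parameters).
num_params = 156.85e6

print(f"fp32: {num_params * 4 / 1e6:.2f} MB")       # ~627.4 MB
print(f"fp16/bf16: {num_params * 2 / 1e6:.2f} MB")  # ~313.7 MB
```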
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-value projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model (a minimal PyTorch sketch follows the list below):
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
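As the minimal PyTorch sketch referenced above (not one of the official examples), the checkpoint can be loaded with the standard T5 classes from `transformers`; the input and target strings are placeholders for your own data:

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_id = "google/t5-efficient-base-nl8"  # this card's checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

# One seq2seq training step: passing `labels` makes the model compute the
# cross-entropy loss of the target sequence internally.
inputs = tokenizer("summarize: <your source text>", return_tensors="pt")
labels = tokenizer("<your target text>", return_tensors="pt").input_ids
loss = model(**inputs, labels=labels).loss
loss.backward()
```

In practice this step would sit inside a training loop (or the `Trainer` API) with an optimizer and a dataset of input/target pairs.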
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend that the reader go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported in the future.
|
{"language": ["en"], "license": "apache-2.0", "tags": ["deep-narrow"], "datasets": ["c4"], "inference": false}
|
google/t5-efficient-base-nl8
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"deep-narrow",
"en",
"dataset:c4",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.10686"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us
|
T5-Efficient-BASE-NL8 (Deep-Narrow version)
===========================================
T5-Efficient-BASE-NL8 is a variation of Google's original T5 following the T5 model architecture.
It is a *pretrained-only* checkpoint and was released with the
paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures
of similar parameter count.
To quote the paper:
>
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
>
>
>
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
Details of the model architecture
--------------------------
This model checkpoint - t5-efficient-base-nl8 - is of model type Base with the following variations:
* nl is 8
It has 156.85 million parameters and thus requires *ca.* 627.39 MB of memory in full precision (*fp32*)
or 313.69 MB of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
where the following abbreviations are used:
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
Pre-Training
------------
The checkpoint was pretrained on the Colossal, Cleaned version of Common Crawl (C4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
Fine-Tuning
-----------
Note: This model is a pretrained checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
* Summarization
* Question Answering
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
Downstream Performance
----------------------
TODO: Add table if available
Computational Complexity
------------------------
TODO: Add table if available
More information
----------------
We strongly recommend that the reader go carefully through the original paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers to get a more nuanced understanding of this model checkpoint.
As explained in the following issue, checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept here as they might be ported in the future.
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
# T5-Efficient-BASE (Deep-Narrow version)
T5-Efficient-BASE is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details of the model architecture
This model checkpoint - **t5-efficient-base** - is of model type **Base** with no variations.
It has **222.93** million parameters and thus requires *ca.* **891.73 MB** of memory in full precision (*fp32*)
or **445.86 MB** of memory in half precision (*fp16* or *bf16*).
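As a rough back-of-the-envelope check (not part of the original card), these figures correspond to 4 bytes per parameter in full precision and 2 bytes in half precision, with 1 MB counted as 10^6 bytes and activations/optimizer state ignored:

```python
# Weights-only memory estimate for t5-efficient-base (222.93M parameters).
num_params = 222.93e6

print(f"fp32: {num_params * 4 / 1e6:.2f} MB")       # ~891.7 MB
print(f"fp16/bf16: {num_params * 2 / 1e6:.2f} MB")  # ~445.9 MB
```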
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-value projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model (a minimal PyTorch sketch follows the list below):
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
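As the minimal PyTorch sketch referenced above (not one of the official examples), the checkpoint can be loaded with the standard T5 classes from `transformers`; the input and target strings are placeholders for your own data:

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_id = "google/t5-efficient-base"  # this card's checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

# One seq2seq training step: passing `labels` makes the model compute the
# cross-entropy loss of the target sequence internally.
inputs = tokenizer("summarize: <your source text>", return_tensors="pt")
labels = tokenizer("<your target text>", return_tensors="pt").input_ids
loss = model(**inputs, labels=labels).loss
loss.backward()
```

In practice this step would sit inside a training loop (or the `Trainer` API) with an optimizer and a dataset of input/target pairs.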
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend that the reader go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported in the future.
|
{"language": ["en"], "license": "apache-2.0", "tags": ["deep-narrow"], "datasets": ["c4"], "inference": false}
|
google/t5-efficient-base
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"deep-narrow",
"en",
"dataset:c4",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.10686"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us
|
T5-Efficient-BASE (Deep-Narrow version)
=======================================
T5-Efficient-BASE is a variation of Google's original T5 following the T5 model architecture.
It is a *pretrained-only* checkpoint and was released with the
paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures
of similar parameter count.
To quote the paper:
>
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
>
>
>
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
Details of the model architecture
--------------------------
This model checkpoint - t5-efficient-base - is of model type Base with no variations.
It has 222.93 million parameters and thus requires *ca.* 891.73 MB of memory in full precision (*fp32*)
or 445.86 MB of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
where the following abbreviations are used:
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
Pre-Training
------------
The checkpoint was pretrained on the Colossal, Cleaned version of Common Crawl (C4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
Fine-Tuning
-----------
Note: This model is a pretrained checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
* Summarization
* Question Answering
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
Downstream Performance
----------------------
TODO: Add table if available
Computational Complexity
------------------------
TODO: Add table if available
More information
----------------
We strongly recommend that the reader go carefully through the original paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers to get a more nuanced understanding of this model checkpoint.
As explained in the following issue, checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept here as they might be ported in the future.
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
# T5-Efficient-LARGE-DL12 (Deep-Narrow version)
T5-Efficient-LARGE-DL12 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details of the model architecture
This model checkpoint - **t5-efficient-large-dl12** - is of model type **Large** with the following variations:
- **dl** is **12**
It has **536.34** million parameters and thus requires *ca.* **2145.37 MB** of memory in full precision (*fp32*)
or **1072.69 MB** of memory in half precision (*fp16* or *bf16*).
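As a rough back-of-the-envelope check (not part of the original card), these figures correspond to 4 bytes per parameter in full precision and 2 bytes in half precision, with 1 MB counted as 10^6 bytes and activations/optimizer state ignored:

```python
# Weights-only memory estimate for t5-efficient-large-dl12 (536.34M parameters).
num_params = 536.34e6

print(f"fp32: {num_params * 4 / 1e6:.2f} MB")       # ~2145.4 MB
print(f"fp16/bf16: {num_params * 2 / 1e6:.2f} MB")  # ~1072.7 MB
```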
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-value projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model (a minimal PyTorch sketch follows the list below):
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
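As the minimal PyTorch sketch referenced above (not one of the official examples), the checkpoint can be loaded with the standard T5 classes from `transformers`; the input and target strings are placeholders for your own data:

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_id = "google/t5-efficient-large-dl12"  # this card's checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

# One seq2seq training step: passing `labels` makes the model compute the
# cross-entropy loss of the target sequence internally.
inputs = tokenizer("summarize: <your source text>", return_tensors="pt")
labels = tokenizer("<your target text>", return_tensors="pt").input_ids
loss = model(**inputs, labels=labels).loss
loss.backward()
```

In practice this step would sit inside a training loop (or the `Trainer` API) with an optimizer and a dataset of input/target pairs.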
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend that the reader go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported in the future.
|
{"language": ["en"], "license": "apache-2.0", "tags": ["deep-narrow"], "datasets": ["c4"], "inference": false}
|
google/t5-efficient-large-dl12
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"deep-narrow",
"en",
"dataset:c4",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.10686"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us
|
T5-Efficient-LARGE-DL12 (Deep-Narrow version)
=============================================
T5-Efficient-LARGE-DL12 is a variation of Google's original T5 following the T5 model architecture.
It is a *pretrained-only* checkpoint and was released with the
paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures
of similar parameter count.
To quote the paper:
>
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
>
>
>
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
Details of the model architecture
--------------------------
This model checkpoint - t5-efficient-large-dl12 - is of model type Large with the following variations:
* dl is 12
It has 536.34 million parameters and thus requires *ca.* 2145.37 MB of memory in full precision (*fp32*)
or 1072.69 MB of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
where the following abbreviations are used:
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
Pre-Training
------------
The checkpoint was pretrained on the Colossal, Cleaned version of Common Crawl (C4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
Fine-Tuning
-----------
Note: This model is a pretrained checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
* Summarization
* Question Answering
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
Downstream Performance
----------------------
TODO: Add table if available
Computational Complexity
------------------------
TODO: Add table if available
More information
----------------
We strongly recommend that the reader go carefully through the original paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers to get a more nuanced understanding of this model checkpoint.
As explained in the following issue, checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept here as they might be ported in the future.
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
# T5-Efficient-LARGE-DL16 (Deep-Narrow version)
T5-Efficient-LARGE-DL16 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details of the model architecture
This model checkpoint - **t5-efficient-large-dl16** - is of model type **Large** with the following variations:
- **dl** is **16**
It has **603.47** million parameters and thus requires *ca.* **2413.88 MB** of memory in full precision (*fp32*)
or **1206.94 MB** of memory in half precision (*fp16* or *bf16*).
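As a rough back-of-the-envelope check (not part of the original card), these figures correspond to 4 bytes per parameter in full precision and 2 bytes in half precision, with 1 MB counted as 10^6 bytes and activations/optimizer state ignored:

```python
# Weights-only memory estimate for t5-efficient-large-dl16 (603.47M parameters).
num_params = 603.47e6

print(f"fp32: {num_params * 4 / 1e6:.2f} MB")       # ~2413.9 MB
print(f"fp16/bf16: {num_params * 2 / 1e6:.2f} MB")  # ~1206.9 MB
```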
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-value projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model (a minimal PyTorch sketch follows the list below):
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
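As the minimal PyTorch sketch referenced above (not one of the official examples), the checkpoint can be loaded with the standard T5 classes from `transformers`; the input and target strings are placeholders for your own data:

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_id = "google/t5-efficient-large-dl16"  # this card's checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

# One seq2seq training step: passing `labels` makes the model compute the
# cross-entropy loss of the target sequence internally.
inputs = tokenizer("summarize: <your source text>", return_tensors="pt")
labels = tokenizer("<your target text>", return_tensors="pt").input_ids
loss = model(**inputs, labels=labels).loss
loss.backward()
```

In practice this step would sit inside a training loop (or the `Trainer` API) with an optimizer and a dataset of input/target pairs.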
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend that the reader go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported in the future.
|
{"language": ["en"], "license": "apache-2.0", "tags": ["deep-narrow"], "datasets": ["c4"], "inference": false}
|
google/t5-efficient-large-dl16
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"deep-narrow",
"en",
"dataset:c4",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.10686"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us
|
T5-Efficient-LARGE-DL16 (Deep-Narrow version)
=============================================
T5-Efficient-LARGE-DL16 is a variation of Google's original T5 following the T5 model architecture.
It is a *pretrained-only* checkpoint and was released with the
paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures
of similar parameter count.
To quote the paper:
>
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
>
>
>
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
Details of the model architecture
--------------------------
This model checkpoint - t5-efficient-large-dl16 - is of model type Large with the following variations:
* dl is 16
It has 603.47 million parameters and thus requires *ca.* 2413.88 MB of memory in full precision (*fp32*)
or 1206.94 MB of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
where the following abbreviations are used:
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
Pre-Training
------------
The checkpoint was pretrained on the Colossal, Cleaned version of Common Crawl (C4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
Fine-Tuning
-----------
Note: This model is a pretrained checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
* Summarization
* Question Answering
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
Downstream Performance
----------------------
TODO: Add table if available
Computational Complexity
------------------------
TODO: Add table if available
More information
----------------
We strongly recommend that the reader go carefully through the original paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers to get a more nuanced understanding of this model checkpoint.
As explained in the following issue, checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept here as they might be ported in the future.
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
# T5-Efficient-LARGE-DL2 (Deep-Narrow version)
T5-Efficient-LARGE-DL2 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details of the model architecture
This model checkpoint - **t5-efficient-large-dl2** - is of model type **Large** with the following variations:
- **dl** is **2**
It has **368.53** million parameters and thus requires *ca.* **1474.11 MB** of memory in full precision (*fp32*)
or **737.05 MB** of memory in half precision (*fp16* or *bf16*).
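As a rough back-of-the-envelope check (not part of the original card), these figures correspond to 4 bytes per parameter in full precision and 2 bytes in half precision, with 1 MB counted as 10^6 bytes and activations/optimizer state ignored:

```python
# Weights-only memory estimate for t5-efficient-large-dl2 (368.53M parameters).
num_params = 368.53e6

print(f"fp32: {num_params * 4 / 1e6:.2f} MB")       # ~1474.1 MB
print(f"fp16/bf16: {num_params * 2 / 1e6:.2f} MB")  # ~737.1 MB
```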
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-value projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model (a minimal PyTorch sketch follows the list below):
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
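As the minimal PyTorch sketch referenced above (not one of the official examples), the checkpoint can be loaded with the standard T5 classes from `transformers`; the input and target strings are placeholders for your own data:

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_id = "google/t5-efficient-large-dl2"  # this card's checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

# One seq2seq training step: passing `labels` makes the model compute the
# cross-entropy loss of the target sequence internally.
inputs = tokenizer("summarize: <your source text>", return_tensors="pt")
labels = tokenizer("<your target text>", return_tensors="pt").input_ids
loss = model(**inputs, labels=labels).loss
loss.backward()
```

In practice this step would sit inside a training loop (or the `Trainer` API) with an optimizer and a dataset of input/target pairs.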
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend that the reader go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported in the future.
|
{"language": ["en"], "license": "apache-2.0", "tags": ["deep-narrow"], "datasets": ["c4"], "inference": false}
|
google/t5-efficient-large-dl2
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"deep-narrow",
"en",
"dataset:c4",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.10686"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us
|
T5-Efficient-LARGE-DL2 (Deep-Narrow version)
============================================
T5-Efficient-LARGE-DL2 is a variation of Google's original T5 following the T5 model architecture.
It is a *pretrained-only* checkpoint and was released with the
paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures
of similar parameter count.
To quote the paper:
>
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
>
>
>
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
Model architecture details
--------------------------
This model checkpoint - t5-efficient-large-dl2 - is of model type Large with the following variations:
* dl is 2
It has 368.53 million parameters and thus requires *ca.* 1474.11 MB of memory in full precision (*fp32*)
or 737.05 MB of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
where the following abbreviations are used:
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
Pre-Training
------------
The checkpoint was pretrained on the Colossal, Cleaned version of Common Crawl (C4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
Fine-Tuning
-----------
Note: This model is a pretrained checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model:
*PyTorch*:
* Summarization
* Question Answering
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
Downstream Performance
----------------------
TODO: Add table if available
Computational Complexity
------------------------
TODO: Add table if available
More information
----------------
We strongly recommend that the reader go carefully through the original paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers to get a more nuanced understanding of this model checkpoint.
As explained in the following issue, checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept here as they might be ported in the future.
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
# T5-Efficient-LARGE-DL32 (Deep-Narrow version)
T5-Efficient-LARGE-DL32 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Model architecture details
This model checkpoint - **t5-efficient-large-dl32** - is of model type **Large** with the following variations:
- **dl** is **32**
It has **871.98** million parameters and thus requires *ca.* **3487.91 MB** of memory in full precision (*fp32*)
or **1743.96 MB** of memory in half precision (*fp16* or *bf16*).
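Since the repository is tagged for PyTorch, TensorFlow, and JAX, the checkpoint can in principle be loaded with any of the three backends. The sketch below shows the TensorFlow and Flax model classes; it assumes that TF and Flax weights are available for this checkpoint and that the corresponding frameworks are installed.

```
# Sketch: load the checkpoint with TensorFlow or Flax (assumes the weights are available).
from transformers import TFT5ForConditionalGeneration, FlaxT5ForConditionalGeneration, T5TokenizerFast

model_name = "google/t5-efficient-large-dl32"
tokenizer = T5TokenizerFast.from_pretrained(model_name)

tf_model = TFT5ForConditionalGeneration.from_pretrained(model_name)
flax_model = FlaxT5ForConditionalGeneration.from_pretrained(model_name)
```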
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
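To make the abbreviations concrete for this checkpoint, the sketch below reads the corresponding fields from the Transformers configuration; the values in the comments are taken from the tables above and are an assumption about how the checkpoint's configuration is populated.

```
# Sketch: map the abbreviations above onto the checkpoint's configuration fields.
from transformers import T5Config

config = T5Config.from_pretrained("google/t5-efficient-large-dl32")

print(config.num_layers)          # el - encoder depth (24 for Large)
print(config.num_decoder_layers)  # dl - decoder depth (32 for this variation)
print(config.d_model)             # dm - embedding dimension (1024)
print(config.d_kv)                # kv - key/value projection dimension (64)
print(config.num_heads)           # nh - number of attention heads (16)
print(config.d_ff)                # ff - feed-forward dimension (4096)
```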
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend that the reader go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported in the future.
|
{"language": ["en"], "license": "apache-2.0", "tags": ["deep-narrow"], "datasets": ["c4"], "inference": false}
|
google/t5-efficient-large-dl32
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"deep-narrow",
"en",
"dataset:c4",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.10686"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us
|
T5-Efficient-LARGE-DL32 (Deep-Narrow version)
=============================================
T5-Efficient-LARGE-DL32 is a variation of Google's original T5 following the T5 model architecture.
It is a *pretrained-only* checkpoint and was released with the
paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures
of similar parameter count.
To quote the paper:
>
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
>
>
>
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
Model architecture details
--------------------------
This model checkpoint - t5-efficient-large-dl32 - is of model type Large with the following variations:
* dl is 32
It has 871.98 million parameters and thus requires *ca.* 3487.91 MB of memory in full precision (*fp32*)
or 1743.96 MB of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
where the following abbreviations are used:
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
Pre-Training
------------
The checkpoint was pretrained on the Colossal, Cleaned version of Common Crawl (C4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
Fine-Tuning
-----------
Note: This model is a pretrained checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model:
*PyTorch*:
* Summarization
* Question Answering
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
Downstream Performance
----------------------
TODO: Add table if available
Computational Complexity
------------------------
TODO: Add table if available
More information
----------------
We strongly recommend that the reader go carefully through the original paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers to get a more nuanced understanding of this model checkpoint.
As explained in the following issue, checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept here as they might be ported in the future.
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
# T5-Efficient-LARGE-DL4 (Deep-Narrow version)
T5-Efficient-LARGE-DL4 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Model architecture details
This model checkpoint - **t5-efficient-large-dl4** - is of model type **Large** with the following variations:
- **dl** is **4**
It has **402.09** million parameters and thus requires *ca.* **1608.36 MB** of memory in full precision (*fp32*)
or **804.18 MB** of memory in half precision (*fp16* or *bf16*).
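The parameter and memory figures above can be double-checked with a few lines of code. The sketch below assumes PyTorch, counts 4 bytes per parameter in fp32 and 2 bytes in fp16/bf16, and uses 1 MB = 10^6 bytes, which is the convention the numbers above appear to follow.

```
# Sketch: verify the parameter count and estimate the weight memory footprint.
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("google/t5-efficient-large-dl4")

num_params = sum(p.numel() for p in model.parameters())
print(f"{num_params / 1e6:.2f} M parameters")         # ca. 402 M
print(f"{num_params * 4 / 1e6:.2f} MB in fp32")       # ca. 1608 MB
print(f"{num_params * 2 / 1e6:.2f} MB in fp16/bf16")  # ca. 804 MB
```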
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend that the reader go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported in the future.
|
{"language": ["en"], "license": "apache-2.0", "tags": ["deep-narrow"], "datasets": ["c4"], "inference": false}
|
google/t5-efficient-large-dl4
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"deep-narrow",
"en",
"dataset:c4",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.10686"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us
|
T5-Efficient-LARGE-DL4 (Deep-Narrow version)
============================================
T5-Efficient-LARGE-DL4 is a variation of Google's original T5 following the T5 model architecture.
It is a *pretrained-only* checkpoint and was released with the
paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures
of similar parameter count.
To quote the paper:
>
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
>
>
>
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
Model architecture details
--------------------------
This model checkpoint - t5-efficient-large-dl4 - is of model type Large with the following variations:
* dl is 4
It has 402.09 million parameters and thus requires *ca.* 1608.36 MB of memory in full precision (*fp32*)
or 804.18 MB of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
where the following abbreviations are used:
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
Pre-Training
------------
The checkpoint was pretrained on the Colossal, Cleaned version of Common Crawl (C4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
Fine-Tuning
-----------
Note: This model is a pretrained checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model:
*PyTorch*:
* Summarization
* Question Answering
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
Downstream Performance
----------------------
TODO: Add table if available
Computational Complexity
------------------------
TODO: Add table if available
More information
----------------
We strongly recommend that the reader go carefully through the original paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers to get a more nuanced understanding of this model checkpoint.
As explained in the following issue, checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept here as they might be ported in the future.
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
# T5-Efficient-LARGE-DL6 (Deep-Narrow version)
T5-Efficient-LARGE-DL6 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Model architecture details
This model checkpoint - **t5-efficient-large-dl6** - is of model type **Large** with the following variations:
- **dl** is **6**
It has **435.65** million parameters and thus requires *ca.* **1742.61 MB** of memory in full precision (*fp32*)
or **871.31 MB** of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
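Because this is a pretrained-only checkpoint, the only behaviour it exhibits out of the box is span filling. The hedged sketch below probes that behaviour with `generate`; the input sentence is made up, the output quality is not guaranteed, and the example assumes a `transformers` version that supports `max_new_tokens`.

```
# Sketch: probe the pretrained (not fine-tuned) checkpoint's span-filling behaviour.
from transformers import T5ForConditionalGeneration, T5TokenizerFast

model_name = "google/t5-efficient-large-dl6"
tokenizer = T5TokenizerFast.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

input_ids = tokenizer("The <extra_id_0> walks in <extra_id_1> park", return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```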
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend that the reader go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported in the future.
|
{"language": ["en"], "license": "apache-2.0", "tags": ["deep-narrow"], "datasets": ["c4"], "inference": false}
|
google/t5-efficient-large-dl6
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"deep-narrow",
"en",
"dataset:c4",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.10686"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us
|
T5-Efficient-LARGE-DL6 (Deep-Narrow version)
============================================
T5-Efficient-LARGE-DL6 is a variation of Google's original T5 following the T5 model architecture.
It is a *pretrained-only* checkpoint and was released with the
paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures
of similar parameter count.
To quote the paper:
>
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
>
>
>
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
Model architecture details
--------------------------
This model checkpoint - t5-efficient-large-dl6 - is of model type Large with the following variations:
* dl is 6
It has 435.65 million parameters and thus requires *ca.* 1742.61 MB of memory in full precision (*fp32*)
or 871.31 MB of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
where the following abbreviations are used:
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
Pre-Training
------------
The checkpoint was pretrained on the Colossal, Cleaned version of Common Crawl (C4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
Fine-Tuning
-----------
Note: This model is a pretrained checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model:
*PyTorch*:
* Summarization
* Question Answering
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
Downstream Performance
----------------------
TODO: Add table if available
Computational Complexity
------------------------
TODO: Add table if available
More information
----------------
We strongly recommend that the reader go carefully through the original paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers to get a more nuanced understanding of this model checkpoint.
As explained in the following issue, checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept here as they might be ported in the future.
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
# T5-Efficient-LARGE-DL8 (Deep-Narrow version)
T5-Efficient-LARGE-DL8 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Model architecture details
This model checkpoint - **t5-efficient-large-dl8** - is of model type **Large** with the following variations:
- **dl** is **8**
It has **469.22** million parameters and thus requires *ca.* **1876.87 MB** of memory in full precision (*fp32*)
or **938.43 MB** of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model (the core of this adaptation is sketched below).
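The *Note* next to the text-classification examples means that, for an encoder-decoder model such as T5, class labels are predicted as target text rather than via a classification head. The sketch below shows the core of that adaptation; the sentence and the verbalized labels are made up for illustration.

```
# Sketch: classification cast as text-to-text for an encoder-decoder model.
from transformers import T5ForConditionalGeneration, T5TokenizerFast

model_name = "google/t5-efficient-large-dl8"
tokenizer = T5TokenizerFast.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

sentence = "the movie was surprisingly good"
label_text = "positive"  # labels verbalized as strings, e.g. "positive" / "negative"

inputs = tokenizer(sentence, return_tensors="pt")
labels = tokenizer(label_text, return_tensors="pt").input_ids

# The usual seq2seq loss on the verbalized label replaces the classification head.
loss = model(**inputs, labels=labels).loss
```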
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend that the reader go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported in the future.
|
{"language": ["en"], "license": "apache-2.0", "tags": ["deep-narrow"], "datasets": ["c4"], "inference": false}
|
google/t5-efficient-large-dl8
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"deep-narrow",
"en",
"dataset:c4",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.10686"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us
|
T5-Efficient-LARGE-DL8 (Deep-Narrow version)
============================================
T5-Efficient-LARGE-DL8 is a variation of Google's original T5 following the T5 model architecture.
It is a *pretrained-only* checkpoint and was released with the
paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures
of similar parameter count.
To quote the paper:
>
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
>
>
>
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
Model architecture details
--------------------------
This model checkpoint - t5-efficient-large-dl8 - is of model type Large with the following variations:
* dl is 8
It has 469.22 million parameters and thus requires *ca.* 1876.87 MB of memory in full precision (*fp32*)
or 938.43 MB of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
where the following abbreviations are used:
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
Pre-Training
------------
The checkpoint was pretrained on the Colossal, Cleaned version of Common Crawl (C4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
Fine-Tuning
-----------
Note: This model is a pretrained checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model:
*PyTorch*:
* Summarization
* Question Answering
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
Downstream Performance
----------------------
TODO: Add table if available
Computational Complexity
------------------------
TODO: Add table if available
More information
----------------
We strongly recommend that the reader go carefully through the original paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers to get a more nuanced understanding of this model checkpoint.
As explained in the following issue, checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept here as they might be ported in the future.
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
# T5-Efficient-LARGE-DM128 (Deep-Narrow version)
T5-Efficient-LARGE-DM128 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Model architecture details
This model checkpoint - **t5-efficient-large-dm128** - is of model type **Large** with the following variations:
- **dm** is **128**
It has **92.27** million parameters and thus requires *ca.* **369.06 MB** of memory in full precision (*fp32*)
or **184.53 MB** of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend that the reader go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported in the future.
|
{"language": ["en"], "license": "apache-2.0", "tags": ["deep-narrow"], "datasets": ["c4"], "inference": false}
|
google/t5-efficient-large-dm128
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"deep-narrow",
"en",
"dataset:c4",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.10686"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us
|
T5-Efficient-LARGE-DM128 (Deep-Narrow version)
==============================================
T5-Efficient-LARGE-DM128 is a variation of Google's original T5 following the T5 model architecture.
It is a *pretrained-only* checkpoint and was released with the
paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures
of similar parameter count.
To quote the paper:
>
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
>
>
>
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
Details model architecture
--------------------------
This model checkpoint - t5-efficient-large-dm128 - is of model type Large with the following variations:
* dm is 128
It has 92.27 million parameters and thus requires *ca.* 369.06 MB of memory in full precision (*fp32*)
or 184.53 MB of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
whereas the following abbreviations are used:
If a model checkpoint specifies no *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
Pre-Training
------------
The checkpoint was pretrained on the Colossal, Cleaned version of Common Crawl (C4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
Fine-Tuning
-----------
Note: This model is a pretrained checkpoint and has to be fine-tuned before practical use.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model:
*PyTorch*:
* Summarization
* Question Answering
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*TensorFlow*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
Downstream Performance
----------------------
TODO: Add table if available
Computational Complexity
------------------------
TODO: Add table if available
More information
----------------
We strongly recommend going carefully through the original paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers for a more nuanced understanding of this model checkpoint.
As explained in the linked issue, checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept here and might be ported in the future.
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
# T5-Efficient-LARGE-DM2000 (Deep-Narrow version)
T5-Efficient-LARGE-DM2000 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details model architecture
This model checkpoint - **t5-efficient-large-dm2000** - is of model type **Large** with the following variations:
- **dm** is **2000**
It has **1475.39** million parameters and thus requires *ca.* **5901.57 MB** of memory in full precision (*fp32*)
or **2950.78 MB** of memory in half precision (*fp16* or *bf16*).
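The memory figures above follow directly from the parameter count: roughly 4 bytes per parameter in full precision and 2 bytes in half precision, excluding activations and optimizer states. A small sketch of the arithmetic (the figures match the numbers above up to rounding):

```python
# Rough estimate of weight memory from the parameter count (weights only;
# activations and optimizer states are not included).
num_params = 1_475_390_000  # ~1475.39M parameters for t5-efficient-large-dm2000

def weight_memory_mb(n_params: int, bytes_per_param: int) -> float:
    return n_params * bytes_per_param / 1e6

print(f"fp32:      {weight_memory_mb(num_params, 4):.2f} MB")  # ~5901.6 MB
print(f"fp16/bf16: {weight_memory_mb(num_params, 2):.2f} MB")  # ~2950.8 MB
```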
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
whereas the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key/value projection matrices are tied |
If a model checkpoint specifies no *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned before practical use.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model (a minimal TensorFlow sketch is also given after the lists below):
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*TensorFlow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
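As referenced above, here is a minimal TensorFlow sketch of loading this checkpoint and computing a seq2seq training loss on one example. The checkpoint is tagged with TensorFlow weights, so `TFT5ForConditionalGeneration` should load it directly; the example pair and task prefix are placeholders.

```python
# Minimal TensorFlow sketch: load the checkpoint and compute the seq2seq loss
# on a single placeholder (input, target) pair.
import tensorflow as tf
from transformers import AutoTokenizer, TFT5ForConditionalGeneration

model_name = "google/t5-efficient-large-dm2000"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = TFT5ForConditionalGeneration.from_pretrained(model_name)

inputs = tokenizer("summarize: The quick brown fox jumped over the lazy dog.",
                   return_tensors="tf")
labels = tokenizer("A fox jumped over a dog.", return_tensors="tf").input_ids

outputs = model(input_ids=inputs["input_ids"],
                attention_mask=inputs["attention_mask"],
                labels=labels)
print(tf.reduce_mean(outputs.loss))  # mean training loss over target tokens
```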
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend going carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** for a more nuanced understanding of this model checkpoint.
As explained in this [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) and might be ported in the future.
|
{"language": ["en"], "license": "apache-2.0", "tags": ["deep-narrow"], "datasets": ["c4"], "inference": false}
|
google/t5-efficient-large-dm2000
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"deep-narrow",
"en",
"dataset:c4",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.10686"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us
|
T5-Efficient-LARGE-DM2000 (Deep-Narrow version)
===============================================
T5-Efficient-LARGE-DM2000 is a variation of Google's original T5 following the T5 model architecture.
It is a *pretrained-only* checkpoint and was released with the
paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures
of similar parameter count.
To quote the paper:
>
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
>
>
>
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
Details model architecture
--------------------------
This model checkpoint - t5-efficient-large-dm2000 - is of model type Large with the following variations:
* dm is 2000
It has 1475.39 million parameters and thus requires *ca.* 5901.57 MB of memory in full precision (*fp32*)
or 2950.78 MB of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
whereas the following abbreviations are used:
If a model checkpoint specifies no *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
Pre-Training
------------
The checkpoint was pretrained on the Colossal, Cleaned version of Common Crawl (C4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
Fine-Tuning
-----------
Note: This model is a pretrained checkpoint and has to be fine-tuned before practical use.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model:
*PyTorch*:
* Summarization
* Question Answering
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*TensorFlow*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
* Summarization
* Text Classification - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
Downstream Performance
----------------------
TODO: Add table if available
Computational Complexity
------------------------
TODO: Add table if available
More information
----------------
We strongly recommend going carefully through the original paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers for a more nuanced understanding of this model checkpoint.
As explained in the linked issue, checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept here and might be ported in the future.
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #t5 #text2text-generation #deep-narrow #en #dataset-c4 #arxiv-2109.10686 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us \n"
] |