| Column | Type | Values / lengths |
|---|---|---|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 198 values |
| text | stringlengths | 1–900k |
| metadata | stringlengths | 2–438k |
| id | stringlengths | 5–122 |
| last_modified | null | |
| tags | listlengths | 1–1.84k |
| sha | null | |
| created_at | stringlengths | 25–25 |
| arxiv | listlengths | 0–201 |
| languages | listlengths | 0–1.83k |
| tags_str | stringlengths | 17–9.34k |
| text_str | stringlengths | 0–389k |
| text_lists | listlengths | 0–722 |
| processed_texts | listlengths | 1–723 |
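Each record below follows this column layout: one Hugging Face model repository per row, with the raw model card text, its metadata, and derived fields such as `tags_str` and `text_str`. As a minimal sketch of how such records can be inspected (assuming a local JSON Lines export; the file name `model_cards.jsonl` is only an illustration, not part of the dataset), the fields map onto plain dictionaries:

```python
import json

# Hypothetical local export of the records described by the column table above;
# the file name is an assumption for illustration.
with open("model_cards.jsonl", "r", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        model_id = record["id"]                 # e.g. "google/bert_uncased_L-2_H-128_A-2"
        tags = record.get("tags") or []         # repository tags, e.g. ["transformers", "pytorch", ...]
        arxiv_ids = record.get("arxiv") or []   # linked papers, e.g. ["1908.08962"]
        created_at = record.get("created_at")   # ISO-8601 timestamp string
        card_text = record.get("text") or ""    # the raw model card markdown
        print(f"{model_id}: {len(tags)} tags, arxiv={arxiv_ids}, "
              f"created={created_at}, card={len(card_text)} chars")
```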
null
transformers
BERT Miniatures
===

This is the set of 24 BERT models referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962) (English only, uncased, trained with WordPiece masking).

We have shown that the standard BERT recipe (including model architecture and training objective) is effective on a wide range of model sizes, beyond BERT-Base and BERT-Large. The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher.

Our goal is to enable research in institutions with fewer computational resources and encourage the community to seek directions of innovation alternative to increasing model capacity.

You can download the 24 BERT miniatures either from the [official BERT Github page](https://github.com/google-research/bert/), or via HuggingFace from the links below:

| |H=128|H=256|H=512|H=768|
|---|:---:|:---:|:---:|:---:|
| **L=2** |[**2/128 (BERT-Tiny)**][2_128]|[2/256][2_256]|[2/512][2_512]|[2/768][2_768]|
| **L=4** |[4/128][4_128]|[**4/256 (BERT-Mini)**][4_256]|[**4/512 (BERT-Small)**][4_512]|[4/768][4_768]|
| **L=6** |[6/128][6_128]|[6/256][6_256]|[6/512][6_512]|[6/768][6_768]|
| **L=8** |[8/128][8_128]|[8/256][8_256]|[**8/512 (BERT-Medium)**][8_512]|[8/768][8_768]|
| **L=10** |[10/128][10_128]|[10/256][10_256]|[10/512][10_512]|[10/768][10_768]|
| **L=12** |[12/128][12_128]|[12/256][12_256]|[12/512][12_512]|[**12/768 (BERT-Base)**][12_768]|

Note that the BERT-Base model in this release is included for completeness only; it was re-trained under the same regime as the original model.
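The checkpoints in the table above load with the standard `transformers` auto classes. The snippet below is a minimal sketch (not part of the original card) that loads BERT-Tiny and runs a single sentence through it; any of the other 23 model names can be substituted.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Any of the 24 miniature checkpoints listed above can be used here.
model_name = "google/bert_uncased_L-2_H-128_A-2"  # BERT-Tiny (L=2, H=128)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

inputs = tokenizer("Compact BERT models can be fine-tuned like the original.",
                   return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# The hidden size matches the H dimension of the chosen miniature (128 for BERT-Tiny).
print(outputs.last_hidden_state.shape)  # torch.Size([1, sequence_length, 128])
```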
Here are the corresponding GLUE scores on the test set:

|Model|Score|CoLA|SST-2|MRPC|STS-B|QQP|MNLI-m|MNLI-mm|QNLI(v2)|RTE|WNLI|AX|
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|BERT-Tiny|64.2|0.0|83.2|81.1/71.1|74.3/73.6|62.2/83.4|70.2|70.3|81.5|57.2|62.3|21.0|
|BERT-Mini|65.8|0.0|85.9|81.1/71.8|75.4/73.3|66.4/86.2|74.8|74.3|84.1|57.9|62.3|26.1|
|BERT-Small|71.2|27.8|89.7|83.4/76.2|78.8/77.0|68.1/87.0|77.6|77.0|86.4|61.8|62.3|28.6|
|BERT-Medium|73.5|38.0|89.6|86.6/81.6|80.4/78.4|69.6/87.9|80.0|79.1|87.7|62.2|62.3|30.5|

For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained for 4 epochs:

- batch sizes: 8, 16, 32, 64, 128
- learning rates: 3e-4, 1e-4, 5e-5, 3e-5

If you use these models, please cite the following paper:

```
@article{turc2019,
  title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models},
  author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
  journal={arXiv preprint arXiv:1908.08962v2},
  year={2019}
}
```

[2_128]: https://huggingface.co/google/bert_uncased_L-2_H-128_A-2
[2_256]: https://huggingface.co/google/bert_uncased_L-2_H-256_A-4
[2_512]: https://huggingface.co/google/bert_uncased_L-2_H-512_A-8
[2_768]: https://huggingface.co/google/bert_uncased_L-2_H-768_A-12
[4_128]: https://huggingface.co/google/bert_uncased_L-4_H-128_A-2
[4_256]: https://huggingface.co/google/bert_uncased_L-4_H-256_A-4
[4_512]: https://huggingface.co/google/bert_uncased_L-4_H-512_A-8
[4_768]: https://huggingface.co/google/bert_uncased_L-4_H-768_A-12
[6_128]: https://huggingface.co/google/bert_uncased_L-6_H-128_A-2
[6_256]: https://huggingface.co/google/bert_uncased_L-6_H-256_A-4
[6_512]: https://huggingface.co/google/bert_uncased_L-6_H-512_A-8
[6_768]: https://huggingface.co/google/bert_uncased_L-6_H-768_A-12
[8_128]: https://huggingface.co/google/bert_uncased_L-8_H-128_A-2
[8_256]: https://huggingface.co/google/bert_uncased_L-8_H-256_A-4
[8_512]: https://huggingface.co/google/bert_uncased_L-8_H-512_A-8
[8_768]: https://huggingface.co/google/bert_uncased_L-8_H-768_A-12
[10_128]: https://huggingface.co/google/bert_uncased_L-10_H-128_A-2
[10_256]: https://huggingface.co/google/bert_uncased_L-10_H-256_A-4
[10_512]: https://huggingface.co/google/bert_uncased_L-10_H-512_A-8
[10_768]: https://huggingface.co/google/bert_uncased_L-10_H-768_A-12
[12_128]: https://huggingface.co/google/bert_uncased_L-12_H-128_A-2
[12_256]: https://huggingface.co/google/bert_uncased_L-12_H-256_A-4
[12_512]: https://huggingface.co/google/bert_uncased_L-12_H-512_A-8
[12_768]: https://huggingface.co/google/bert_uncased_L-12_H-768_A-12
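For illustration, the per-task hyperparameter selection described above amounts to a small grid search over batch size and learning rate at a fixed 4 epochs. The sketch below only shows the shape of that sweep; `finetune_and_eval` is a hypothetical stand-in for a task-specific fine-tuning and evaluation run, not code released with the paper.

```python
from itertools import product

# Grid from the model card: 4 training epochs, with these batch sizes and learning rates.
BATCH_SIZES = [8, 16, 32, 64, 128]
LEARNING_RATES = [3e-4, 1e-4, 5e-5, 3e-5]
NUM_EPOCHS = 4


def finetune_and_eval(batch_size: int, learning_rate: float, num_epochs: int) -> float:
    """Hypothetical stand-in: fine-tune on one GLUE task and return its dev-set metric."""
    raise NotImplementedError("Replace with a task-specific fine-tuning run.")


def select_best_config():
    # Evaluate every (batch size, learning rate) pair and keep the best-scoring one,
    # mirroring the per-task selection described in the card.
    best_score, best_config = float("-inf"), None
    for batch_size, learning_rate in product(BATCH_SIZES, LEARNING_RATES):
        score = finetune_and_eval(batch_size, learning_rate, NUM_EPOCHS)
        if score > best_score:
            best_score, best_config = score, (batch_size, learning_rate)
    return best_config, best_score
```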
{"license": "apache-2.0", "thumbnail": "https://huggingface.co/front/thumbnails/google.png"}
google/bert_uncased_L-12_H-768_A-12
null
[ "transformers", "pytorch", "jax", "bert", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1908.08962" ]
[]
TAGS #transformers #pytorch #jax #bert #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
BERT Miniatures =============== This is the set of 24 BERT models referenced in Well-Read Students Learn Better: On the Importance of Pre-training Compact Models (English only, uncased, trained with WordPiece masking). We have shown that the standard BERT recipe (including model architecture and training objective) is effective on a wide range of model sizes, beyond BERT-Base and BERT-Large. The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher. Our goal is to enable research in institutions with fewer computational resources and encourage the community to seek directions of innovation alternative to increasing model capacity. You can download the 24 BERT miniatures either from the official BERT Github page, or via HuggingFace from the links below: Note that the BERT-Base model in this release is included for completeness only; it was re-trained under the same regime as the original model. Here are the corresponding GLUE scores on the test set: For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained for 4 epochs: * batch sizes: 8, 16, 32, 64, 128 * learning rates: 3e-4, 1e-4, 5e-5, 3e-5 If you use these models, please cite the following paper:
[]
[ "TAGS\n#transformers #pytorch #jax #bert #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n" ]
null
transformers
{"license": "apache-2.0", "thumbnail": "https://huggingface.co/front/thumbnails/google.png"}
google/bert_uncased_L-2_H-128_A-2
null
[ "transformers", "pytorch", "jax", "safetensors", "bert", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1908.08962" ]
[]
TAGS #transformers #pytorch #jax #safetensors #bert #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #has_space #region-us
[]
[ "TAGS\n#transformers #pytorch #jax #safetensors #bert #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n" ]
null
transformers
{"license": "apache-2.0", "thumbnail": "https://huggingface.co/front/thumbnails/google.png"}
google/bert_uncased_L-2_H-256_A-4
null
[ "transformers", "pytorch", "jax", "bert", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1908.08962" ]
[]
TAGS #transformers #pytorch #jax #bert #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
[]
[ "TAGS\n#transformers #pytorch #jax #bert #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n" ]
null
transformers
{"license": "apache-2.0", "thumbnail": "https://huggingface.co/front/thumbnails/google.png"}
google/bert_uncased_L-2_H-512_A-8
null
[ "transformers", "pytorch", "jax", "bert", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1908.08962" ]
[]
TAGS #transformers #pytorch #jax #bert #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
[]
[ "TAGS\n#transformers #pytorch #jax #bert #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n" ]
null
transformers
{"license": "apache-2.0", "thumbnail": "https://huggingface.co/front/thumbnails/google.png"}
google/bert_uncased_L-2_H-768_A-12
null
[ "transformers", "pytorch", "jax", "bert", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1908.08962" ]
[]
TAGS #transformers #pytorch #jax #bert #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
[]
[ "TAGS\n#transformers #pytorch #jax #bert #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n" ]
null
transformers
{"license": "apache-2.0", "thumbnail": "https://huggingface.co/front/thumbnails/google.png"}
google/bert_uncased_L-4_H-128_A-2
null
[ "transformers", "pytorch", "jax", "bert", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1908.08962" ]
[]
TAGS #transformers #pytorch #jax #bert #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
[]
[ "TAGS\n#transformers #pytorch #jax #bert #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n" ]
null
transformers
{"license": "apache-2.0", "thumbnail": "https://huggingface.co/front/thumbnails/google.png"}
google/bert_uncased_L-4_H-256_A-4
null
[ "transformers", "pytorch", "jax", "bert", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1908.08962" ]
[]
TAGS #transformers #pytorch #jax #bert #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
[]
[ "TAGS\n#transformers #pytorch #jax #bert #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n" ]
null
transformers
{"license": "apache-2.0", "thumbnail": "https://huggingface.co/front/thumbnails/google.png"}
google/bert_uncased_L-4_H-512_A-8
null
[ "transformers", "pytorch", "jax", "bert", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1908.08962" ]
[]
TAGS #transformers #pytorch #jax #bert #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
[]
[ "TAGS\n#transformers #pytorch #jax #bert #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n" ]
null
transformers
{"license": "apache-2.0", "thumbnail": "https://huggingface.co/front/thumbnails/google.png"}
google/bert_uncased_L-4_H-768_A-12
null
[ "transformers", "pytorch", "jax", "bert", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1908.08962" ]
[]
TAGS #transformers #pytorch #jax #bert #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
BERT Miniatures =============== This is the set of 24 BERT models referenced in Well-Read Students Learn Better: On the Importance of Pre-training Compact Models (English only, uncased, trained with WordPiece masking). We have shown that the standard BERT recipe (including model architecture and training objective) is effective on a wide range of model sizes, beyond BERT-Base and BERT-Large. The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher. Our goal is to enable research in institutions with fewer computational resources and encourage the community to seek directions of innovation alternative to increasing model capacity. You can download the 24 BERT miniatures either from the official BERT Github page, or via HuggingFace from the links below: Note that the BERT-Base model in this release is included for completeness only; it was re-trained under the same regime as the original model. Here are the corresponding GLUE scores on the test set: For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained for 4 epochs: * batch sizes: 8, 16, 32, 64, 128 * learning rates: 3e-4, 1e-4, 5e-5, 3e-5 If you use these models, please cite the following paper:
[]
[ "TAGS\n#transformers #pytorch #jax #bert #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n" ]
null
transformers
BERT Miniatures === This is the set of 24 BERT models referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962) (English only, uncased, trained with WordPiece masking). We have shown that the standard BERT recipe (including model architecture and training objective) is effective on a wide range of model sizes, beyond BERT-Base and BERT-Large. The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher. Our goal is to enable research in institutions with fewer computational resources and encourage the community to seek directions of innovation alternative to increasing model capacity. You can download the 24 BERT miniatures either from the [official BERT Github page](https://github.com/google-research/bert/), or via HuggingFace from the links below: | |H=128|H=256|H=512|H=768| |---|:---:|:---:|:---:|:---:| | **L=2** |[**2/128 (BERT-Tiny)**][2_128]|[2/256][2_256]|[2/512][2_512]|[2/768][2_768]| | **L=4** |[4/128][4_128]|[**4/256 (BERT-Mini)**][4_256]|[**4/512 (BERT-Small)**][4_512]|[4/768][4_768]| | **L=6** |[6/128][6_128]|[6/256][6_256]|[6/512][6_512]|[6/768][6_768]| | **L=8** |[8/128][8_128]|[8/256][8_256]|[**8/512 (BERT-Medium)**][8_512]|[8/768][8_768]| | **L=10** |[10/128][10_128]|[10/256][10_256]|[10/512][10_512]|[10/768][10_768]| | **L=12** |[12/128][12_128]|[12/256][12_256]|[12/512][12_512]|[**12/768 (BERT-Base)**][12_768]| Note that the BERT-Base model in this release is included for completeness only; it was re-trained under the same regime as the original model. 
Here are the corresponding GLUE scores on the test set: |Model|Score|CoLA|SST-2|MRPC|STS-B|QQP|MNLI-m|MNLI-mm|QNLI(v2)|RTE|WNLI|AX| |---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| |BERT-Tiny|64.2|0.0|83.2|81.1/71.1|74.3/73.6|62.2/83.4|70.2|70.3|81.5|57.2|62.3|21.0| |BERT-Mini|65.8|0.0|85.9|81.1/71.8|75.4/73.3|66.4/86.2|74.8|74.3|84.1|57.9|62.3|26.1| |BERT-Small|71.2|27.8|89.7|83.4/76.2|78.8/77.0|68.1/87.0|77.6|77.0|86.4|61.8|62.3|28.6| |BERT-Medium|73.5|38.0|89.6|86.6/81.6|80.4/78.4|69.6/87.9|80.0|79.1|87.7|62.2|62.3|30.5| For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained for 4 epochs: - batch sizes: 8, 16, 32, 64, 128 - learning rates: 3e-4, 1e-4, 5e-5, 3e-5 If you use these models, please cite the following paper: ``` @article{turc2019, title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models}, author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina}, journal={arXiv preprint arXiv:1908.08962v2 }, year={2019} } ``` [2_128]: https://huggingface.co/google/bert_uncased_L-2_H-128_A-2 [2_256]: https://huggingface.co/google/bert_uncased_L-2_H-256_A-4 [2_512]: https://huggingface.co/google/bert_uncased_L-2_H-512_A-8 [2_768]: https://huggingface.co/google/bert_uncased_L-2_H-768_A-12 [4_128]: https://huggingface.co/google/bert_uncased_L-4_H-128_A-2 [4_256]: https://huggingface.co/google/bert_uncased_L-4_H-256_A-4 [4_512]: https://huggingface.co/google/bert_uncased_L-4_H-512_A-8 [4_768]: https://huggingface.co/google/bert_uncased_L-4_H-768_A-12 [6_128]: https://huggingface.co/google/bert_uncased_L-6_H-128_A-2 [6_256]: https://huggingface.co/google/bert_uncased_L-6_H-256_A-4 [6_512]: https://huggingface.co/google/bert_uncased_L-6_H-512_A-8 [6_768]: https://huggingface.co/google/bert_uncased_L-6_H-768_A-12 [8_128]: https://huggingface.co/google/bert_uncased_L-8_H-128_A-2 [8_256]: https://huggingface.co/google/bert_uncased_L-8_H-256_A-4 [8_512]: https://huggingface.co/google/bert_uncased_L-8_H-512_A-8 [8_768]: https://huggingface.co/google/bert_uncased_L-8_H-768_A-12 [10_128]: https://huggingface.co/google/bert_uncased_L-10_H-128_A-2 [10_256]: https://huggingface.co/google/bert_uncased_L-10_H-256_A-4 [10_512]: https://huggingface.co/google/bert_uncased_L-10_H-512_A-8 [10_768]: https://huggingface.co/google/bert_uncased_L-10_H-768_A-12 [12_128]: https://huggingface.co/google/bert_uncased_L-12_H-128_A-2 [12_256]: https://huggingface.co/google/bert_uncased_L-12_H-256_A-4 [12_512]: https://huggingface.co/google/bert_uncased_L-12_H-512_A-8 [12_768]: https://huggingface.co/google/bert_uncased_L-12_H-768_A-12
{"license": "apache-2.0", "thumbnail": "https://huggingface.co/front/thumbnails/google.png"}
google/bert_uncased_L-6_H-128_A-2
null
[ "transformers", "pytorch", "jax", "bert", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1908.08962" ]
[]
TAGS #transformers #pytorch #jax #bert #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
BERT Miniatures =============== This is the set of 24 BERT models referenced in Well-Read Students Learn Better: On the Importance of Pre-training Compact Models (English only, uncased, trained with WordPiece masking). We have shown that the standard BERT recipe (including model architecture and training objective) is effective on a wide range of model sizes, beyond BERT-Base and BERT-Large. The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher. Our goal is to enable research in institutions with fewer computational resources and encourage the community to seek directions of innovation alternative to increasing model capacity. You can download the 24 BERT miniatures either from the official BERT Github page, or via HuggingFace from the links below: Note that the BERT-Base model in this release is included for completeness only; it was re-trained under the same regime as the original model. Here are the corresponding GLUE scores on the test set: For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained for 4 epochs: * batch sizes: 8, 16, 32, 64, 128 * learning rates: 3e-4, 1e-4, 5e-5, 3e-5 If you use these models, please cite the following paper:
[]
[ "TAGS\n#transformers #pytorch #jax #bert #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n" ]
null
transformers
BERT Miniatures === This is the set of 24 BERT models referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962) (English only, uncased, trained with WordPiece masking). We have shown that the standard BERT recipe (including model architecture and training objective) is effective on a wide range of model sizes, beyond BERT-Base and BERT-Large. The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher. Our goal is to enable research in institutions with fewer computational resources and encourage the community to seek directions of innovation alternative to increasing model capacity. You can download the 24 BERT miniatures either from the [official BERT Github page](https://github.com/google-research/bert/), or via HuggingFace from the links below: | |H=128|H=256|H=512|H=768| |---|:---:|:---:|:---:|:---:| | **L=2** |[**2/128 (BERT-Tiny)**][2_128]|[2/256][2_256]|[2/512][2_512]|[2/768][2_768]| | **L=4** |[4/128][4_128]|[**4/256 (BERT-Mini)**][4_256]|[**4/512 (BERT-Small)**][4_512]|[4/768][4_768]| | **L=6** |[6/128][6_128]|[6/256][6_256]|[6/512][6_512]|[6/768][6_768]| | **L=8** |[8/128][8_128]|[8/256][8_256]|[**8/512 (BERT-Medium)**][8_512]|[8/768][8_768]| | **L=10** |[10/128][10_128]|[10/256][10_256]|[10/512][10_512]|[10/768][10_768]| | **L=12** |[12/128][12_128]|[12/256][12_256]|[12/512][12_512]|[**12/768 (BERT-Base)**][12_768]| Note that the BERT-Base model in this release is included for completeness only; it was re-trained under the same regime as the original model. 
Here are the corresponding GLUE scores on the test set: |Model|Score|CoLA|SST-2|MRPC|STS-B|QQP|MNLI-m|MNLI-mm|QNLI(v2)|RTE|WNLI|AX| |---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| |BERT-Tiny|64.2|0.0|83.2|81.1/71.1|74.3/73.6|62.2/83.4|70.2|70.3|81.5|57.2|62.3|21.0| |BERT-Mini|65.8|0.0|85.9|81.1/71.8|75.4/73.3|66.4/86.2|74.8|74.3|84.1|57.9|62.3|26.1| |BERT-Small|71.2|27.8|89.7|83.4/76.2|78.8/77.0|68.1/87.0|77.6|77.0|86.4|61.8|62.3|28.6| |BERT-Medium|73.5|38.0|89.6|86.6/81.6|80.4/78.4|69.6/87.9|80.0|79.1|87.7|62.2|62.3|30.5| For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained for 4 epochs: - batch sizes: 8, 16, 32, 64, 128 - learning rates: 3e-4, 1e-4, 5e-5, 3e-5 If you use these models, please cite the following paper: ``` @article{turc2019, title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models}, author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina}, journal={arXiv preprint arXiv:1908.08962v2 }, year={2019} } ``` [2_128]: https://huggingface.co/google/bert_uncased_L-2_H-128_A-2 [2_256]: https://huggingface.co/google/bert_uncased_L-2_H-256_A-4 [2_512]: https://huggingface.co/google/bert_uncased_L-2_H-512_A-8 [2_768]: https://huggingface.co/google/bert_uncased_L-2_H-768_A-12 [4_128]: https://huggingface.co/google/bert_uncased_L-4_H-128_A-2 [4_256]: https://huggingface.co/google/bert_uncased_L-4_H-256_A-4 [4_512]: https://huggingface.co/google/bert_uncased_L-4_H-512_A-8 [4_768]: https://huggingface.co/google/bert_uncased_L-4_H-768_A-12 [6_128]: https://huggingface.co/google/bert_uncased_L-6_H-128_A-2 [6_256]: https://huggingface.co/google/bert_uncased_L-6_H-256_A-4 [6_512]: https://huggingface.co/google/bert_uncased_L-6_H-512_A-8 [6_768]: https://huggingface.co/google/bert_uncased_L-6_H-768_A-12 [8_128]: https://huggingface.co/google/bert_uncased_L-8_H-128_A-2 [8_256]: https://huggingface.co/google/bert_uncased_L-8_H-256_A-4 [8_512]: https://huggingface.co/google/bert_uncased_L-8_H-512_A-8 [8_768]: https://huggingface.co/google/bert_uncased_L-8_H-768_A-12 [10_128]: https://huggingface.co/google/bert_uncased_L-10_H-128_A-2 [10_256]: https://huggingface.co/google/bert_uncased_L-10_H-256_A-4 [10_512]: https://huggingface.co/google/bert_uncased_L-10_H-512_A-8 [10_768]: https://huggingface.co/google/bert_uncased_L-10_H-768_A-12 [12_128]: https://huggingface.co/google/bert_uncased_L-12_H-128_A-2 [12_256]: https://huggingface.co/google/bert_uncased_L-12_H-256_A-4 [12_512]: https://huggingface.co/google/bert_uncased_L-12_H-512_A-8 [12_768]: https://huggingface.co/google/bert_uncased_L-12_H-768_A-12
{"license": "apache-2.0", "thumbnail": "https://huggingface.co/front/thumbnails/google.png"}
google/bert_uncased_L-6_H-256_A-4
null
[ "transformers", "pytorch", "jax", "bert", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1908.08962" ]
[]
TAGS #transformers #pytorch #jax #bert #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
BERT Miniatures =============== This is the set of 24 BERT models referenced in Well-Read Students Learn Better: On the Importance of Pre-training Compact Models (English only, uncased, trained with WordPiece masking). We have shown that the standard BERT recipe (including model architecture and training objective) is effective on a wide range of model sizes, beyond BERT-Base and BERT-Large. The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher. Our goal is to enable research in institutions with fewer computational resources and encourage the community to seek directions of innovation alternative to increasing model capacity. You can download the 24 BERT miniatures either from the official BERT Github page, or via HuggingFace from the links below: Note that the BERT-Base model in this release is included for completeness only; it was re-trained under the same regime as the original model. Here are the corresponding GLUE scores on the test set: For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained for 4 epochs: * batch sizes: 8, 16, 32, 64, 128 * learning rates: 3e-4, 1e-4, 5e-5, 3e-5 If you use these models, please cite the following paper:
[]
[ "TAGS\n#transformers #pytorch #jax #bert #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n" ]
null
transformers
BERT Miniatures === This is the set of 24 BERT models referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962) (English only, uncased, trained with WordPiece masking). We have shown that the standard BERT recipe (including model architecture and training objective) is effective on a wide range of model sizes, beyond BERT-Base and BERT-Large. The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher. Our goal is to enable research in institutions with fewer computational resources and encourage the community to seek directions of innovation alternative to increasing model capacity. You can download the 24 BERT miniatures either from the [official BERT Github page](https://github.com/google-research/bert/), or via HuggingFace from the links below: | |H=128|H=256|H=512|H=768| |---|:---:|:---:|:---:|:---:| | **L=2** |[**2/128 (BERT-Tiny)**][2_128]|[2/256][2_256]|[2/512][2_512]|[2/768][2_768]| | **L=4** |[4/128][4_128]|[**4/256 (BERT-Mini)**][4_256]|[**4/512 (BERT-Small)**][4_512]|[4/768][4_768]| | **L=6** |[6/128][6_128]|[6/256][6_256]|[6/512][6_512]|[6/768][6_768]| | **L=8** |[8/128][8_128]|[8/256][8_256]|[**8/512 (BERT-Medium)**][8_512]|[8/768][8_768]| | **L=10** |[10/128][10_128]|[10/256][10_256]|[10/512][10_512]|[10/768][10_768]| | **L=12** |[12/128][12_128]|[12/256][12_256]|[12/512][12_512]|[**12/768 (BERT-Base)**][12_768]| Note that the BERT-Base model in this release is included for completeness only; it was re-trained under the same regime as the original model. 
Here are the corresponding GLUE scores on the test set: |Model|Score|CoLA|SST-2|MRPC|STS-B|QQP|MNLI-m|MNLI-mm|QNLI(v2)|RTE|WNLI|AX| |---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| |BERT-Tiny|64.2|0.0|83.2|81.1/71.1|74.3/73.6|62.2/83.4|70.2|70.3|81.5|57.2|62.3|21.0| |BERT-Mini|65.8|0.0|85.9|81.1/71.8|75.4/73.3|66.4/86.2|74.8|74.3|84.1|57.9|62.3|26.1| |BERT-Small|71.2|27.8|89.7|83.4/76.2|78.8/77.0|68.1/87.0|77.6|77.0|86.4|61.8|62.3|28.6| |BERT-Medium|73.5|38.0|89.6|86.6/81.6|80.4/78.4|69.6/87.9|80.0|79.1|87.7|62.2|62.3|30.5| For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained for 4 epochs: - batch sizes: 8, 16, 32, 64, 128 - learning rates: 3e-4, 1e-4, 5e-5, 3e-5 If you use these models, please cite the following paper: ``` @article{turc2019, title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models}, author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina}, journal={arXiv preprint arXiv:1908.08962v2 }, year={2019} } ``` [2_128]: https://huggingface.co/google/bert_uncased_L-2_H-128_A-2 [2_256]: https://huggingface.co/google/bert_uncased_L-2_H-256_A-4 [2_512]: https://huggingface.co/google/bert_uncased_L-2_H-512_A-8 [2_768]: https://huggingface.co/google/bert_uncased_L-2_H-768_A-12 [4_128]: https://huggingface.co/google/bert_uncased_L-4_H-128_A-2 [4_256]: https://huggingface.co/google/bert_uncased_L-4_H-256_A-4 [4_512]: https://huggingface.co/google/bert_uncased_L-4_H-512_A-8 [4_768]: https://huggingface.co/google/bert_uncased_L-4_H-768_A-12 [6_128]: https://huggingface.co/google/bert_uncased_L-6_H-128_A-2 [6_256]: https://huggingface.co/google/bert_uncased_L-6_H-256_A-4 [6_512]: https://huggingface.co/google/bert_uncased_L-6_H-512_A-8 [6_768]: https://huggingface.co/google/bert_uncased_L-6_H-768_A-12 [8_128]: https://huggingface.co/google/bert_uncased_L-8_H-128_A-2 [8_256]: https://huggingface.co/google/bert_uncased_L-8_H-256_A-4 [8_512]: https://huggingface.co/google/bert_uncased_L-8_H-512_A-8 [8_768]: https://huggingface.co/google/bert_uncased_L-8_H-768_A-12 [10_128]: https://huggingface.co/google/bert_uncased_L-10_H-128_A-2 [10_256]: https://huggingface.co/google/bert_uncased_L-10_H-256_A-4 [10_512]: https://huggingface.co/google/bert_uncased_L-10_H-512_A-8 [10_768]: https://huggingface.co/google/bert_uncased_L-10_H-768_A-12 [12_128]: https://huggingface.co/google/bert_uncased_L-12_H-128_A-2 [12_256]: https://huggingface.co/google/bert_uncased_L-12_H-256_A-4 [12_512]: https://huggingface.co/google/bert_uncased_L-12_H-512_A-8 [12_768]: https://huggingface.co/google/bert_uncased_L-12_H-768_A-12
{"license": "apache-2.0", "thumbnail": "https://huggingface.co/front/thumbnails/google.png"}
google/bert_uncased_L-6_H-512_A-8
null
[ "transformers", "pytorch", "jax", "bert", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1908.08962" ]
[]
TAGS #transformers #pytorch #jax #bert #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
BERT Miniatures =============== This is the set of 24 BERT models referenced in Well-Read Students Learn Better: On the Importance of Pre-training Compact Models (English only, uncased, trained with WordPiece masking). We have shown that the standard BERT recipe (including model architecture and training objective) is effective on a wide range of model sizes, beyond BERT-Base and BERT-Large. The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher. Our goal is to enable research in institutions with fewer computational resources and encourage the community to seek directions of innovation alternative to increasing model capacity. You can download the 24 BERT miniatures either from the official BERT Github page, or via HuggingFace from the links below: Note that the BERT-Base model in this release is included for completeness only; it was re-trained under the same regime as the original model. Here are the corresponding GLUE scores on the test set: For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained for 4 epochs: * batch sizes: 8, 16, 32, 64, 128 * learning rates: 3e-4, 1e-4, 5e-5, 3e-5 If you use these models, please cite the following paper:
[]
[ "TAGS\n#transformers #pytorch #jax #bert #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n" ]
null
transformers
BERT Miniatures === This is the set of 24 BERT models referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962) (English only, uncased, trained with WordPiece masking). We have shown that the standard BERT recipe (including model architecture and training objective) is effective on a wide range of model sizes, beyond BERT-Base and BERT-Large. The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher. Our goal is to enable research in institutions with fewer computational resources and encourage the community to seek directions of innovation alternative to increasing model capacity. You can download the 24 BERT miniatures either from the [official BERT Github page](https://github.com/google-research/bert/), or via HuggingFace from the links below: | |H=128|H=256|H=512|H=768| |---|:---:|:---:|:---:|:---:| | **L=2** |[**2/128 (BERT-Tiny)**][2_128]|[2/256][2_256]|[2/512][2_512]|[2/768][2_768]| | **L=4** |[4/128][4_128]|[**4/256 (BERT-Mini)**][4_256]|[**4/512 (BERT-Small)**][4_512]|[4/768][4_768]| | **L=6** |[6/128][6_128]|[6/256][6_256]|[6/512][6_512]|[6/768][6_768]| | **L=8** |[8/128][8_128]|[8/256][8_256]|[**8/512 (BERT-Medium)**][8_512]|[8/768][8_768]| | **L=10** |[10/128][10_128]|[10/256][10_256]|[10/512][10_512]|[10/768][10_768]| | **L=12** |[12/128][12_128]|[12/256][12_256]|[12/512][12_512]|[**12/768 (BERT-Base)**][12_768]| Note that the BERT-Base model in this release is included for completeness only; it was re-trained under the same regime as the original model. 
Here are the corresponding GLUE scores on the test set: |Model|Score|CoLA|SST-2|MRPC|STS-B|QQP|MNLI-m|MNLI-mm|QNLI(v2)|RTE|WNLI|AX| |---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| |BERT-Tiny|64.2|0.0|83.2|81.1/71.1|74.3/73.6|62.2/83.4|70.2|70.3|81.5|57.2|62.3|21.0| |BERT-Mini|65.8|0.0|85.9|81.1/71.8|75.4/73.3|66.4/86.2|74.8|74.3|84.1|57.9|62.3|26.1| |BERT-Small|71.2|27.8|89.7|83.4/76.2|78.8/77.0|68.1/87.0|77.6|77.0|86.4|61.8|62.3|28.6| |BERT-Medium|73.5|38.0|89.6|86.6/81.6|80.4/78.4|69.6/87.9|80.0|79.1|87.7|62.2|62.3|30.5| For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained for 4 epochs: - batch sizes: 8, 16, 32, 64, 128 - learning rates: 3e-4, 1e-4, 5e-5, 3e-5 If you use these models, please cite the following paper: ``` @article{turc2019, title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models}, author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina}, journal={arXiv preprint arXiv:1908.08962v2 }, year={2019} } ``` [2_128]: https://huggingface.co/google/bert_uncased_L-2_H-128_A-2 [2_256]: https://huggingface.co/google/bert_uncased_L-2_H-256_A-4 [2_512]: https://huggingface.co/google/bert_uncased_L-2_H-512_A-8 [2_768]: https://huggingface.co/google/bert_uncased_L-2_H-768_A-12 [4_128]: https://huggingface.co/google/bert_uncased_L-4_H-128_A-2 [4_256]: https://huggingface.co/google/bert_uncased_L-4_H-256_A-4 [4_512]: https://huggingface.co/google/bert_uncased_L-4_H-512_A-8 [4_768]: https://huggingface.co/google/bert_uncased_L-4_H-768_A-12 [6_128]: https://huggingface.co/google/bert_uncased_L-6_H-128_A-2 [6_256]: https://huggingface.co/google/bert_uncased_L-6_H-256_A-4 [6_512]: https://huggingface.co/google/bert_uncased_L-6_H-512_A-8 [6_768]: https://huggingface.co/google/bert_uncased_L-6_H-768_A-12 [8_128]: https://huggingface.co/google/bert_uncased_L-8_H-128_A-2 [8_256]: https://huggingface.co/google/bert_uncased_L-8_H-256_A-4 [8_512]: https://huggingface.co/google/bert_uncased_L-8_H-512_A-8 [8_768]: https://huggingface.co/google/bert_uncased_L-8_H-768_A-12 [10_128]: https://huggingface.co/google/bert_uncased_L-10_H-128_A-2 [10_256]: https://huggingface.co/google/bert_uncased_L-10_H-256_A-4 [10_512]: https://huggingface.co/google/bert_uncased_L-10_H-512_A-8 [10_768]: https://huggingface.co/google/bert_uncased_L-10_H-768_A-12 [12_128]: https://huggingface.co/google/bert_uncased_L-12_H-128_A-2 [12_256]: https://huggingface.co/google/bert_uncased_L-12_H-256_A-4 [12_512]: https://huggingface.co/google/bert_uncased_L-12_H-512_A-8 [12_768]: https://huggingface.co/google/bert_uncased_L-12_H-768_A-12
{"license": "apache-2.0", "thumbnail": "https://huggingface.co/front/thumbnails/google.png"}
google/bert_uncased_L-6_H-768_A-12
null
[ "transformers", "pytorch", "jax", "bert", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1908.08962" ]
[]
TAGS #transformers #pytorch #jax #bert #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
BERT Miniatures =============== This is the set of 24 BERT models referenced in Well-Read Students Learn Better: On the Importance of Pre-training Compact Models (English only, uncased, trained with WordPiece masking). We have shown that the standard BERT recipe (including model architecture and training objective) is effective on a wide range of model sizes, beyond BERT-Base and BERT-Large. The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher. Our goal is to enable research in institutions with fewer computational resources and encourage the community to seek directions of innovation alternative to increasing model capacity. You can download the 24 BERT miniatures either from the official BERT Github page, or via HuggingFace from the links below: Note that the BERT-Base model in this release is included for completeness only; it was re-trained under the same regime as the original model. Here are the corresponding GLUE scores on the test set: For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained for 4 epochs: * batch sizes: 8, 16, 32, 64, 128 * learning rates: 3e-4, 1e-4, 5e-5, 3e-5 If you use these models, please cite the following paper:
[]
[ "TAGS\n#transformers #pytorch #jax #bert #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n" ]
null
transformers
BERT Miniatures === This is the set of 24 BERT models referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962) (English only, uncased, trained with WordPiece masking). We have shown that the standard BERT recipe (including model architecture and training objective) is effective on a wide range of model sizes, beyond BERT-Base and BERT-Large. The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher. Our goal is to enable research in institutions with fewer computational resources and encourage the community to seek directions of innovation alternative to increasing model capacity. You can download the 24 BERT miniatures either from the [official BERT Github page](https://github.com/google-research/bert/), or via HuggingFace from the links below: | |H=128|H=256|H=512|H=768| |---|:---:|:---:|:---:|:---:| | **L=2** |[**2/128 (BERT-Tiny)**][2_128]|[2/256][2_256]|[2/512][2_512]|[2/768][2_768]| | **L=4** |[4/128][4_128]|[**4/256 (BERT-Mini)**][4_256]|[**4/512 (BERT-Small)**][4_512]|[4/768][4_768]| | **L=6** |[6/128][6_128]|[6/256][6_256]|[6/512][6_512]|[6/768][6_768]| | **L=8** |[8/128][8_128]|[8/256][8_256]|[**8/512 (BERT-Medium)**][8_512]|[8/768][8_768]| | **L=10** |[10/128][10_128]|[10/256][10_256]|[10/512][10_512]|[10/768][10_768]| | **L=12** |[12/128][12_128]|[12/256][12_256]|[12/512][12_512]|[**12/768 (BERT-Base)**][12_768]| Note that the BERT-Base model in this release is included for completeness only; it was re-trained under the same regime as the original model. 
Here are the corresponding GLUE scores on the test set: |Model|Score|CoLA|SST-2|MRPC|STS-B|QQP|MNLI-m|MNLI-mm|QNLI(v2)|RTE|WNLI|AX| |---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| |BERT-Tiny|64.2|0.0|83.2|81.1/71.1|74.3/73.6|62.2/83.4|70.2|70.3|81.5|57.2|62.3|21.0| |BERT-Mini|65.8|0.0|85.9|81.1/71.8|75.4/73.3|66.4/86.2|74.8|74.3|84.1|57.9|62.3|26.1| |BERT-Small|71.2|27.8|89.7|83.4/76.2|78.8/77.0|68.1/87.0|77.6|77.0|86.4|61.8|62.3|28.6| |BERT-Medium|73.5|38.0|89.6|86.6/81.6|80.4/78.4|69.6/87.9|80.0|79.1|87.7|62.2|62.3|30.5| For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained for 4 epochs: - batch sizes: 8, 16, 32, 64, 128 - learning rates: 3e-4, 1e-4, 5e-5, 3e-5 If you use these models, please cite the following paper: ``` @article{turc2019, title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models}, author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina}, journal={arXiv preprint arXiv:1908.08962v2 }, year={2019} } ``` [2_128]: https://huggingface.co/google/bert_uncased_L-2_H-128_A-2 [2_256]: https://huggingface.co/google/bert_uncased_L-2_H-256_A-4 [2_512]: https://huggingface.co/google/bert_uncased_L-2_H-512_A-8 [2_768]: https://huggingface.co/google/bert_uncased_L-2_H-768_A-12 [4_128]: https://huggingface.co/google/bert_uncased_L-4_H-128_A-2 [4_256]: https://huggingface.co/google/bert_uncased_L-4_H-256_A-4 [4_512]: https://huggingface.co/google/bert_uncased_L-4_H-512_A-8 [4_768]: https://huggingface.co/google/bert_uncased_L-4_H-768_A-12 [6_128]: https://huggingface.co/google/bert_uncased_L-6_H-128_A-2 [6_256]: https://huggingface.co/google/bert_uncased_L-6_H-256_A-4 [6_512]: https://huggingface.co/google/bert_uncased_L-6_H-512_A-8 [6_768]: https://huggingface.co/google/bert_uncased_L-6_H-768_A-12 [8_128]: https://huggingface.co/google/bert_uncased_L-8_H-128_A-2 [8_256]: https://huggingface.co/google/bert_uncased_L-8_H-256_A-4 [8_512]: https://huggingface.co/google/bert_uncased_L-8_H-512_A-8 [8_768]: https://huggingface.co/google/bert_uncased_L-8_H-768_A-12 [10_128]: https://huggingface.co/google/bert_uncased_L-10_H-128_A-2 [10_256]: https://huggingface.co/google/bert_uncased_L-10_H-256_A-4 [10_512]: https://huggingface.co/google/bert_uncased_L-10_H-512_A-8 [10_768]: https://huggingface.co/google/bert_uncased_L-10_H-768_A-12 [12_128]: https://huggingface.co/google/bert_uncased_L-12_H-128_A-2 [12_256]: https://huggingface.co/google/bert_uncased_L-12_H-256_A-4 [12_512]: https://huggingface.co/google/bert_uncased_L-12_H-512_A-8 [12_768]: https://huggingface.co/google/bert_uncased_L-12_H-768_A-12
{"license": "apache-2.0", "thumbnail": "https://huggingface.co/front/thumbnails/google.png"}
google/bert_uncased_L-8_H-128_A-2
null
[ "transformers", "pytorch", "jax", "bert", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1908.08962" ]
[]
TAGS #transformers #pytorch #jax #bert #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
BERT Miniatures =============== This is the set of 24 BERT models referenced in Well-Read Students Learn Better: On the Importance of Pre-training Compact Models (English only, uncased, trained with WordPiece masking). We have shown that the standard BERT recipe (including model architecture and training objective) is effective on a wide range of model sizes, beyond BERT-Base and BERT-Large. The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher. Our goal is to enable research in institutions with fewer computational resources and encourage the community to seek directions of innovation alternative to increasing model capacity. You can download the 24 BERT miniatures either from the official BERT Github page, or via HuggingFace from the links below: Note that the BERT-Base model in this release is included for completeness only; it was re-trained under the same regime as the original model. Here are the corresponding GLUE scores on the test set: For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained for 4 epochs: * batch sizes: 8, 16, 32, 64, 128 * learning rates: 3e-4, 1e-4, 5e-5, 3e-5 If you use these models, please cite the following paper:
[]
[ "TAGS\n#transformers #pytorch #jax #bert #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n" ]
null
transformers
BERT Miniatures === This is the set of 24 BERT models referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962) (English only, uncased, trained with WordPiece masking). We have shown that the standard BERT recipe (including model architecture and training objective) is effective on a wide range of model sizes, beyond BERT-Base and BERT-Large. The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher. Our goal is to enable research in institutions with fewer computational resources and encourage the community to seek directions of innovation alternative to increasing model capacity. You can download the 24 BERT miniatures either from the [official BERT Github page](https://github.com/google-research/bert/), or via HuggingFace from the links below: | |H=128|H=256|H=512|H=768| |---|:---:|:---:|:---:|:---:| | **L=2** |[**2/128 (BERT-Tiny)**][2_128]|[2/256][2_256]|[2/512][2_512]|[2/768][2_768]| | **L=4** |[4/128][4_128]|[**4/256 (BERT-Mini)**][4_256]|[**4/512 (BERT-Small)**][4_512]|[4/768][4_768]| | **L=6** |[6/128][6_128]|[6/256][6_256]|[6/512][6_512]|[6/768][6_768]| | **L=8** |[8/128][8_128]|[8/256][8_256]|[**8/512 (BERT-Medium)**][8_512]|[8/768][8_768]| | **L=10** |[10/128][10_128]|[10/256][10_256]|[10/512][10_512]|[10/768][10_768]| | **L=12** |[12/128][12_128]|[12/256][12_256]|[12/512][12_512]|[**12/768 (BERT-Base)**][12_768]| Note that the BERT-Base model in this release is included for completeness only; it was re-trained under the same regime as the original model. 
Here are the corresponding GLUE scores on the test set: |Model|Score|CoLA|SST-2|MRPC|STS-B|QQP|MNLI-m|MNLI-mm|QNLI(v2)|RTE|WNLI|AX| |---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| |BERT-Tiny|64.2|0.0|83.2|81.1/71.1|74.3/73.6|62.2/83.4|70.2|70.3|81.5|57.2|62.3|21.0| |BERT-Mini|65.8|0.0|85.9|81.1/71.8|75.4/73.3|66.4/86.2|74.8|74.3|84.1|57.9|62.3|26.1| |BERT-Small|71.2|27.8|89.7|83.4/76.2|78.8/77.0|68.1/87.0|77.6|77.0|86.4|61.8|62.3|28.6| |BERT-Medium|73.5|38.0|89.6|86.6/81.6|80.4/78.4|69.6/87.9|80.0|79.1|87.7|62.2|62.3|30.5| For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained for 4 epochs: - batch sizes: 8, 16, 32, 64, 128 - learning rates: 3e-4, 1e-4, 5e-5, 3e-5 If you use these models, please cite the following paper: ``` @article{turc2019, title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models}, author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina}, journal={arXiv preprint arXiv:1908.08962v2 }, year={2019} } ``` [2_128]: https://huggingface.co/google/bert_uncased_L-2_H-128_A-2 [2_256]: https://huggingface.co/google/bert_uncased_L-2_H-256_A-4 [2_512]: https://huggingface.co/google/bert_uncased_L-2_H-512_A-8 [2_768]: https://huggingface.co/google/bert_uncased_L-2_H-768_A-12 [4_128]: https://huggingface.co/google/bert_uncased_L-4_H-128_A-2 [4_256]: https://huggingface.co/google/bert_uncased_L-4_H-256_A-4 [4_512]: https://huggingface.co/google/bert_uncased_L-4_H-512_A-8 [4_768]: https://huggingface.co/google/bert_uncased_L-4_H-768_A-12 [6_128]: https://huggingface.co/google/bert_uncased_L-6_H-128_A-2 [6_256]: https://huggingface.co/google/bert_uncased_L-6_H-256_A-4 [6_512]: https://huggingface.co/google/bert_uncased_L-6_H-512_A-8 [6_768]: https://huggingface.co/google/bert_uncased_L-6_H-768_A-12 [8_128]: https://huggingface.co/google/bert_uncased_L-8_H-128_A-2 [8_256]: https://huggingface.co/google/bert_uncased_L-8_H-256_A-4 [8_512]: https://huggingface.co/google/bert_uncased_L-8_H-512_A-8 [8_768]: https://huggingface.co/google/bert_uncased_L-8_H-768_A-12 [10_128]: https://huggingface.co/google/bert_uncased_L-10_H-128_A-2 [10_256]: https://huggingface.co/google/bert_uncased_L-10_H-256_A-4 [10_512]: https://huggingface.co/google/bert_uncased_L-10_H-512_A-8 [10_768]: https://huggingface.co/google/bert_uncased_L-10_H-768_A-12 [12_128]: https://huggingface.co/google/bert_uncased_L-12_H-128_A-2 [12_256]: https://huggingface.co/google/bert_uncased_L-12_H-256_A-4 [12_512]: https://huggingface.co/google/bert_uncased_L-12_H-512_A-8 [12_768]: https://huggingface.co/google/bert_uncased_L-12_H-768_A-12
{"license": "apache-2.0", "thumbnail": "https://huggingface.co/front/thumbnails/google.png"}
google/bert_uncased_L-8_H-256_A-4
null
[ "transformers", "pytorch", "jax", "bert", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1908.08962" ]
[]
TAGS #transformers #pytorch #jax #bert #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
BERT Miniatures =============== This is the set of 24 BERT models referenced in Well-Read Students Learn Better: On the Importance of Pre-training Compact Models (English only, uncased, trained with WordPiece masking). We have shown that the standard BERT recipe (including model architecture and training objective) is effective on a wide range of model sizes, beyond BERT-Base and BERT-Large. The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher. Our goal is to enable research in institutions with fewer computational resources and encourage the community to seek directions of innovation alternative to increasing model capacity. You can download the 24 BERT miniatures either from the official BERT Github page, or via HuggingFace from the links below: Note that the BERT-Base model in this release is included for completeness only; it was re-trained under the same regime as the original model. Here are the corresponding GLUE scores on the test set: For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained for 4 epochs: * batch sizes: 8, 16, 32, 64, 128 * learning rates: 3e-4, 1e-4, 5e-5, 3e-5 If you use these models, please cite the following paper:
[]
[ "TAGS\n#transformers #pytorch #jax #bert #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n" ]
null
transformers
BERT Miniatures === This is the set of 24 BERT models referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962) (English only, uncased, trained with WordPiece masking). We have shown that the standard BERT recipe (including model architecture and training objective) is effective on a wide range of model sizes, beyond BERT-Base and BERT-Large. The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher. Our goal is to enable research in institutions with fewer computational resources and encourage the community to seek directions of innovation alternative to increasing model capacity. You can download the 24 BERT miniatures either from the [official BERT Github page](https://github.com/google-research/bert/), or via HuggingFace from the links below: | |H=128|H=256|H=512|H=768| |---|:---:|:---:|:---:|:---:| | **L=2** |[**2/128 (BERT-Tiny)**][2_128]|[2/256][2_256]|[2/512][2_512]|[2/768][2_768]| | **L=4** |[4/128][4_128]|[**4/256 (BERT-Mini)**][4_256]|[**4/512 (BERT-Small)**][4_512]|[4/768][4_768]| | **L=6** |[6/128][6_128]|[6/256][6_256]|[6/512][6_512]|[6/768][6_768]| | **L=8** |[8/128][8_128]|[8/256][8_256]|[**8/512 (BERT-Medium)**][8_512]|[8/768][8_768]| | **L=10** |[10/128][10_128]|[10/256][10_256]|[10/512][10_512]|[10/768][10_768]| | **L=12** |[12/128][12_128]|[12/256][12_256]|[12/512][12_512]|[**12/768 (BERT-Base)**][12_768]| Note that the BERT-Base model in this release is included for completeness only; it was re-trained under the same regime as the original model. 
Here are the corresponding GLUE scores on the test set: |Model|Score|CoLA|SST-2|MRPC|STS-B|QQP|MNLI-m|MNLI-mm|QNLI(v2)|RTE|WNLI|AX| |---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| |BERT-Tiny|64.2|0.0|83.2|81.1/71.1|74.3/73.6|62.2/83.4|70.2|70.3|81.5|57.2|62.3|21.0| |BERT-Mini|65.8|0.0|85.9|81.1/71.8|75.4/73.3|66.4/86.2|74.8|74.3|84.1|57.9|62.3|26.1| |BERT-Small|71.2|27.8|89.7|83.4/76.2|78.8/77.0|68.1/87.0|77.6|77.0|86.4|61.8|62.3|28.6| |BERT-Medium|73.5|38.0|89.6|86.6/81.6|80.4/78.4|69.6/87.9|80.0|79.1|87.7|62.2|62.3|30.5| For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained for 4 epochs: - batch sizes: 8, 16, 32, 64, 128 - learning rates: 3e-4, 1e-4, 5e-5, 3e-5 If you use these models, please cite the following paper: ``` @article{turc2019, title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models}, author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina}, journal={arXiv preprint arXiv:1908.08962v2 }, year={2019} } ``` [2_128]: https://huggingface.co/google/bert_uncased_L-2_H-128_A-2 [2_256]: https://huggingface.co/google/bert_uncased_L-2_H-256_A-4 [2_512]: https://huggingface.co/google/bert_uncased_L-2_H-512_A-8 [2_768]: https://huggingface.co/google/bert_uncased_L-2_H-768_A-12 [4_128]: https://huggingface.co/google/bert_uncased_L-4_H-128_A-2 [4_256]: https://huggingface.co/google/bert_uncased_L-4_H-256_A-4 [4_512]: https://huggingface.co/google/bert_uncased_L-4_H-512_A-8 [4_768]: https://huggingface.co/google/bert_uncased_L-4_H-768_A-12 [6_128]: https://huggingface.co/google/bert_uncased_L-6_H-128_A-2 [6_256]: https://huggingface.co/google/bert_uncased_L-6_H-256_A-4 [6_512]: https://huggingface.co/google/bert_uncased_L-6_H-512_A-8 [6_768]: https://huggingface.co/google/bert_uncased_L-6_H-768_A-12 [8_128]: https://huggingface.co/google/bert_uncased_L-8_H-128_A-2 [8_256]: https://huggingface.co/google/bert_uncased_L-8_H-256_A-4 [8_512]: https://huggingface.co/google/bert_uncased_L-8_H-512_A-8 [8_768]: https://huggingface.co/google/bert_uncased_L-8_H-768_A-12 [10_128]: https://huggingface.co/google/bert_uncased_L-10_H-128_A-2 [10_256]: https://huggingface.co/google/bert_uncased_L-10_H-256_A-4 [10_512]: https://huggingface.co/google/bert_uncased_L-10_H-512_A-8 [10_768]: https://huggingface.co/google/bert_uncased_L-10_H-768_A-12 [12_128]: https://huggingface.co/google/bert_uncased_L-12_H-128_A-2 [12_256]: https://huggingface.co/google/bert_uncased_L-12_H-256_A-4 [12_512]: https://huggingface.co/google/bert_uncased_L-12_H-512_A-8 [12_768]: https://huggingface.co/google/bert_uncased_L-12_H-768_A-12
{"license": "apache-2.0", "thumbnail": "https://huggingface.co/front/thumbnails/google.png"}
google/bert_uncased_L-8_H-512_A-8
null
[ "transformers", "pytorch", "jax", "bert", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1908.08962" ]
[]
TAGS #transformers #pytorch #jax #bert #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #has_space #region-us
BERT Miniatures =============== This is the set of 24 BERT models referenced in Well-Read Students Learn Better: On the Importance of Pre-training Compact Models (English only, uncased, trained with WordPiece masking). We have shown that the standard BERT recipe (including model architecture and training objective) is effective on a wide range of model sizes, beyond BERT-Base and BERT-Large. The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher. Our goal is to enable research in institutions with fewer computational resources and encourage the community to seek directions of innovation alternative to increasing model capacity. You can download the 24 BERT miniatures either from the official BERT Github page, or via HuggingFace from the links below: Note that the BERT-Base model in this release is included for completeness only; it was re-trained under the same regime as the original model. Here are the corresponding GLUE scores on the test set: For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained for 4 epochs: * batch sizes: 8, 16, 32, 64, 128 * learning rates: 3e-4, 1e-4, 5e-5, 3e-5 If you use these models, please cite the following paper:
[]
[ "TAGS\n#transformers #pytorch #jax #bert #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n" ]
null
transformers
BERT Miniatures === This is the set of 24 BERT models referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962) (English only, uncased, trained with WordPiece masking). We have shown that the standard BERT recipe (including model architecture and training objective) is effective on a wide range of model sizes, beyond BERT-Base and BERT-Large. The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher. Our goal is to enable research in institutions with fewer computational resources and encourage the community to seek directions of innovation alternative to increasing model capacity. You can download the 24 BERT miniatures either from the [official BERT Github page](https://github.com/google-research/bert/), or via HuggingFace from the links below: | |H=128|H=256|H=512|H=768| |---|:---:|:---:|:---:|:---:| | **L=2** |[**2/128 (BERT-Tiny)**][2_128]|[2/256][2_256]|[2/512][2_512]|[2/768][2_768]| | **L=4** |[4/128][4_128]|[**4/256 (BERT-Mini)**][4_256]|[**4/512 (BERT-Small)**][4_512]|[4/768][4_768]| | **L=6** |[6/128][6_128]|[6/256][6_256]|[6/512][6_512]|[6/768][6_768]| | **L=8** |[8/128][8_128]|[8/256][8_256]|[**8/512 (BERT-Medium)**][8_512]|[8/768][8_768]| | **L=10** |[10/128][10_128]|[10/256][10_256]|[10/512][10_512]|[10/768][10_768]| | **L=12** |[12/128][12_128]|[12/256][12_256]|[12/512][12_512]|[**12/768 (BERT-Base)**][12_768]| Note that the BERT-Base model in this release is included for completeness only; it was re-trained under the same regime as the original model. 
Here are the corresponding GLUE scores on the test set: |Model|Score|CoLA|SST-2|MRPC|STS-B|QQP|MNLI-m|MNLI-mm|QNLI(v2)|RTE|WNLI|AX| |---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| |BERT-Tiny|64.2|0.0|83.2|81.1/71.1|74.3/73.6|62.2/83.4|70.2|70.3|81.5|57.2|62.3|21.0| |BERT-Mini|65.8|0.0|85.9|81.1/71.8|75.4/73.3|66.4/86.2|74.8|74.3|84.1|57.9|62.3|26.1| |BERT-Small|71.2|27.8|89.7|83.4/76.2|78.8/77.0|68.1/87.0|77.6|77.0|86.4|61.8|62.3|28.6| |BERT-Medium|73.5|38.0|89.6|86.6/81.6|80.4/78.4|69.6/87.9|80.0|79.1|87.7|62.2|62.3|30.5| For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained for 4 epochs: - batch sizes: 8, 16, 32, 64, 128 - learning rates: 3e-4, 1e-4, 5e-5, 3e-5 If you use these models, please cite the following paper: ``` @article{turc2019, title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models}, author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina}, journal={arXiv preprint arXiv:1908.08962v2 }, year={2019} } ``` [2_128]: https://huggingface.co/google/bert_uncased_L-2_H-128_A-2 [2_256]: https://huggingface.co/google/bert_uncased_L-2_H-256_A-4 [2_512]: https://huggingface.co/google/bert_uncased_L-2_H-512_A-8 [2_768]: https://huggingface.co/google/bert_uncased_L-2_H-768_A-12 [4_128]: https://huggingface.co/google/bert_uncased_L-4_H-128_A-2 [4_256]: https://huggingface.co/google/bert_uncased_L-4_H-256_A-4 [4_512]: https://huggingface.co/google/bert_uncased_L-4_H-512_A-8 [4_768]: https://huggingface.co/google/bert_uncased_L-4_H-768_A-12 [6_128]: https://huggingface.co/google/bert_uncased_L-6_H-128_A-2 [6_256]: https://huggingface.co/google/bert_uncased_L-6_H-256_A-4 [6_512]: https://huggingface.co/google/bert_uncased_L-6_H-512_A-8 [6_768]: https://huggingface.co/google/bert_uncased_L-6_H-768_A-12 [8_128]: https://huggingface.co/google/bert_uncased_L-8_H-128_A-2 [8_256]: https://huggingface.co/google/bert_uncased_L-8_H-256_A-4 [8_512]: https://huggingface.co/google/bert_uncased_L-8_H-512_A-8 [8_768]: https://huggingface.co/google/bert_uncased_L-8_H-768_A-12 [10_128]: https://huggingface.co/google/bert_uncased_L-10_H-128_A-2 [10_256]: https://huggingface.co/google/bert_uncased_L-10_H-256_A-4 [10_512]: https://huggingface.co/google/bert_uncased_L-10_H-512_A-8 [10_768]: https://huggingface.co/google/bert_uncased_L-10_H-768_A-12 [12_128]: https://huggingface.co/google/bert_uncased_L-12_H-128_A-2 [12_256]: https://huggingface.co/google/bert_uncased_L-12_H-256_A-4 [12_512]: https://huggingface.co/google/bert_uncased_L-12_H-512_A-8 [12_768]: https://huggingface.co/google/bert_uncased_L-12_H-768_A-12
{"license": "apache-2.0", "thumbnail": "https://huggingface.co/front/thumbnails/google.png"}
google/bert_uncased_L-8_H-768_A-12
null
[ "transformers", "pytorch", "jax", "bert", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1908.08962" ]
[]
TAGS #transformers #pytorch #jax #bert #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #has_space #region-us
BERT Miniatures =============== This is the set of 24 BERT models referenced in Well-Read Students Learn Better: On the Importance of Pre-training Compact Models (English only, uncased, trained with WordPiece masking). We have shown that the standard BERT recipe (including model architecture and training objective) is effective on a wide range of model sizes, beyond BERT-Base and BERT-Large. The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher. Our goal is to enable research in institutions with fewer computational resources and encourage the community to seek directions of innovation alternative to increasing model capacity. You can download the 24 BERT miniatures either from the official BERT Github page, or via HuggingFace from the links below: Note that the BERT-Base model in this release is included for completeness only; it was re-trained under the same regime as the original model. Here are the corresponding GLUE scores on the test set: For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained for 4 epochs: * batch sizes: 8, 16, 32, 64, 128 * learning rates: 3e-4, 1e-4, 5e-5, 3e-5 If you use these models, please cite the following paper:
[]
[ "TAGS\n#transformers #pytorch #jax #bert #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n" ]
question-answering
transformers
# BigBird base trivia-itc This model is a checkpoint of `bigbird-roberta-base` fine-tuned on `trivia_qa` with a `BigBirdForQuestionAnsweringHead` on top. Check out [this notebook](https://colab.research.google.com/drive/1DVOm1VHjW0eKCayFq1N2GpY6GR9M4tJP?usp=sharing) to see how well `google/bigbird-base-trivia-itc` performs on question answering. ## How to use Here is how to use this model to answer a question about a given context in PyTorch: ```python from transformers import BigBirdForQuestionAnswering, AutoTokenizer # by default it's in `block_sparse` mode with num_random_blocks=3, block_size=64 model = BigBirdForQuestionAnswering.from_pretrained("google/bigbird-base-trivia-itc") tokenizer = AutoTokenizer.from_pretrained("google/bigbird-base-trivia-itc") # you can change `attention_type` to full attention like this: model = BigBirdForQuestionAnswering.from_pretrained("google/bigbird-base-trivia-itc", attention_type="original_full") # you can change `block_size` & `num_random_blocks` like this: model = BigBirdForQuestionAnswering.from_pretrained("google/bigbird-base-trivia-itc", block_size=16, num_random_blocks=2) question = "Replace me by any text you'd like." context = "Put some context for answering" encoded_input = tokenizer(question, context, return_tensors='pt') output = model(**encoded_input) ``` # Fine-tuning config & hyper-parameters - No. of global tokens = 128 - Window length = 192 - No. of random tokens = 192 - Max. sequence length = 4096 - No. of heads = 12 - No. of hidden layers = 12 - Hidden layer size = 768 - Batch size = 32 - Loss = cross-entropy over noisy spans ## BibTeX entry and citation info ```tex @misc{zaheer2021big, title={Big Bird: Transformers for Longer Sequences}, author={Manzil Zaheer and Guru Guruganesh and Avinava Dubey and Joshua Ainslie and Chris Alberti and Santiago Ontanon and Philip Pham and Anirudh Ravula and Qifan Wang and Li Yang and Amr Ahmed}, year={2021}, eprint={2007.14062}, archivePrefix={arXiv}, primaryClass={cs.LG} } ```
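The snippet in the card stops at the raw model output; below is a hedged, self-contained sketch of how the predicted answer span could be decoded from the standard `start_logits`/`end_logits` returned by `BigBirdForQuestionAnswering`. The question and context strings are made up for illustration.

```python
import torch
from transformers import AutoTokenizer, BigBirdForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("google/bigbird-base-trivia-itc")
model = BigBirdForQuestionAnswering.from_pretrained("google/bigbird-base-trivia-itc")

question = "Who wrote the novel Dracula?"
context = "Dracula is an 1897 Gothic horror novel by the Irish author Bram Stoker."

encoded = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**encoded)

# Greedy span selection: take the highest-scoring start and end positions
start = int(torch.argmax(outputs.start_logits, dim=-1))
end = int(torch.argmax(outputs.end_logits, dim=-1))
answer = tokenizer.decode(encoded["input_ids"][0, start:end + 1], skip_special_tokens=True)
print(answer)
```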
{"language": "en", "license": "apache-2.0", "datasets": ["trivia_qa"]}
google/bigbird-base-trivia-itc
null
[ "transformers", "pytorch", "jax", "big_bird", "question-answering", "en", "dataset:trivia_qa", "arxiv:2007.14062", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2007.14062" ]
[ "en" ]
TAGS #transformers #pytorch #jax #big_bird #question-answering #en #dataset-trivia_qa #arxiv-2007.14062 #license-apache-2.0 #endpoints_compatible #has_space #region-us
# BigBird base trivia-itc This model is a fine-tune checkpoint of 'bigbird-roberta-base', fine-tuned on 'trivia_qa' with 'BigBirdForQuestionAnsweringHead' on its top. Check out this to see how well 'google/bigbird-base-trivia-itc' performs on question answering. ## How to use Here is how to use this model to get the features of a given text in PyTorch: # Fine-tuning config & hyper-parameters - No. of global token = 128 - Window length = 192 - No. of random token = 192 - Max. sequence length = 4096 - No. of heads = 12 - No. of hidden layers = 12 - Hidden layer size = 768 - Batch size = 32 - Loss = cross-entropy noisy spans ## BibTeX entry and citation info
[ "# BigBird base trivia-itc\n\nThis model is a fine-tune checkpoint of 'bigbird-roberta-base', fine-tuned on 'trivia_qa' with 'BigBirdForQuestionAnsweringHead' on its top.\n\nCheck out this to see how well 'google/bigbird-base-trivia-itc' performs on question answering.", "## How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:", "# Fine-tuning config & hyper-parameters\n\n- No. of global token = 128\n- Window length = 192\n- No. of random token = 192\n- Max. sequence length = 4096\n- No. of heads = 12\n- No. of hidden layers = 12\n- Hidden layer size = 768\n- Batch size = 32\n- Loss = cross-entropy noisy spans", "## BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #jax #big_bird #question-answering #en #dataset-trivia_qa #arxiv-2007.14062 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n", "# BigBird base trivia-itc\n\nThis model is a fine-tune checkpoint of 'bigbird-roberta-base', fine-tuned on 'trivia_qa' with 'BigBirdForQuestionAnsweringHead' on its top.\n\nCheck out this to see how well 'google/bigbird-base-trivia-itc' performs on question answering.", "## How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:", "# Fine-tuning config & hyper-parameters\n\n- No. of global token = 128\n- Window length = 192\n- No. of random token = 192\n- Max. sequence length = 4096\n- No. of heads = 12\n- No. of hidden layers = 12\n- Hidden layer size = 768\n- Batch size = 32\n- Loss = cross-entropy noisy spans", "## BibTeX entry and citation info" ]
summarization
transformers
# BigBirdPegasus model (large)

BigBird is a sparse-attention-based transformer which extends Transformer-based models, such as BERT, to much longer sequences. Moreover, BigBird comes with a theoretical understanding of the capabilities of a complete transformer that the sparse model can handle.

BigBird was introduced in this [paper](https://arxiv.org/abs/2007.14062) and first released in this [repository](https://github.com/google-research/bigbird).

Disclaimer: The team releasing BigBird did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

BigBird relies on **block sparse attention** instead of normal attention (i.e. BERT's attention) and can handle sequences up to a length of 4096 at a much lower compute cost compared to BERT. It has achieved SOTA on various tasks involving very long sequences, such as long-document summarization and question answering with long contexts.

## How to use

Here is how to use this model to summarize a given text in PyTorch:

```python
from transformers import BigBirdPegasusForConditionalGeneration, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/bigbird-pegasus-large-arxiv")

# by default encoder-attention is `block_sparse` with num_random_blocks=3, block_size=64
model = BigBirdPegasusForConditionalGeneration.from_pretrained("google/bigbird-pegasus-large-arxiv")

# decoder attention type can't be changed & will be "original_full"

# you can change `attention_type` (encoder only) to full attention like this:
model = BigBirdPegasusForConditionalGeneration.from_pretrained("google/bigbird-pegasus-large-arxiv", attention_type="original_full")

# you can change `block_size` & `num_random_blocks` like this:
model = BigBirdPegasusForConditionalGeneration.from_pretrained("google/bigbird-pegasus-large-arxiv", block_size=16, num_random_blocks=2)

text = "Replace me by any text you'd like."
inputs = tokenizer(text, return_tensors='pt')
prediction = model.generate(**inputs)
prediction = tokenizer.batch_decode(prediction)
```

## Training Procedure

This checkpoint is obtained after fine-tuning `BigBirdPegasusForConditionalGeneration` for **summarization** on the **arxiv** subset of [scientific_papers](https://huggingface.co/datasets/scientific_papers).

## BibTeX entry and citation info

```tex
@misc{zaheer2021big,
      title={Big Bird: Transformers for Longer Sequences},
      author={Manzil Zaheer and Guru Guruganesh and Avinava Dubey and Joshua Ainslie and Chris Alberti and Santiago Ontanon and Philip Pham and Anirudh Ravula and Qifan Wang and Li Yang and Amr Ahmed},
      year={2021},
      eprint={2007.14062},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```
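The card's usage snippet runs `generate` with default settings. Below is a hedged sketch of long-document summarization with explicit beam search; the `num_beams`, `max_length`, and `length_penalty` values are illustrative assumptions rather than the checkpoint's official evaluation configuration.

```python
import torch
from transformers import BigBirdPegasusForConditionalGeneration, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/bigbird-pegasus-large-arxiv")
model = BigBirdPegasusForConditionalGeneration.from_pretrained("google/bigbird-pegasus-large-arxiv")

# placeholder input; a real scientific article would go here
article = "Replace me by the full text of a scientific article."
inputs = tokenizer(article, truncation=True, max_length=4096, return_tensors="pt")

# beam-search settings below are illustrative, not the official evaluation configuration
with torch.no_grad():
    summary_ids = model.generate(**inputs, num_beams=5, max_length=256, length_penalty=0.8)

print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```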
{"language": "en", "license": "apache-2.0", "tags": ["summarization"], "datasets": ["scientific_papers"], "model-index": [{"name": "google/bigbird-pegasus-large-arxiv", "results": [{"task": {"type": "summarization", "name": "Summarization"}, "dataset": {"name": "scientific_papers", "type": "scientific_papers", "config": "pubmed", "split": "test"}, "metrics": [{"type": "rouge", "value": 36.0276, "name": "ROUGE-1", "verified": true}, {"type": "rouge", "value": 13.4166, "name": "ROUGE-2", "verified": true}, {"type": "rouge", "value": 21.9612, "name": "ROUGE-L", "verified": true}, {"type": "rouge", "value": 29.648, "name": "ROUGE-LSUM", "verified": true}, {"type": "loss", "value": 2.774355173110962, "name": "loss", "verified": true}, {"type": "meteor", "value": 0.2824, "name": "meteor", "verified": true}, {"type": "gen_len", "value": 209.2537, "name": "gen_len", "verified": true}]}, {"task": {"type": "summarization", "name": "Summarization"}, "dataset": {"name": "cnn_dailymail", "type": "cnn_dailymail", "config": "3.0.0", "split": "test"}, "metrics": [{"type": "rouge", "value": 9.0885, "name": "ROUGE-1", "verified": true}, {"type": "rouge", "value": 1.0325, "name": "ROUGE-2", "verified": true}, {"type": "rouge", "value": 7.3182, "name": "ROUGE-L", "verified": true}, {"type": "rouge", "value": 8.1455, "name": "ROUGE-LSUM", "verified": true}, {"type": "loss", "value": NaN, "name": "loss", "verified": true}, {"type": "gen_len", "value": 210.4762, "name": "gen_len", "verified": true}]}, {"task": {"type": "summarization", "name": "Summarization"}, "dataset": {"name": "xsum", "type": "xsum", "config": "default", "split": "test"}, "metrics": [{"type": "rouge", "value": 4.9787, "name": "ROUGE-1", "verified": true}, {"type": "rouge", "value": 0.3527, "name": "ROUGE-2", "verified": true}, {"type": "rouge", "value": 4.3679, "name": "ROUGE-L", "verified": true}, {"type": "rouge", "value": 4.1723, "name": "ROUGE-LSUM", "verified": true}, {"type": "loss", "value": NaN, "name": "loss", "verified": true}, {"type": "gen_len", "value": 230.4886, "name": "gen_len", "verified": true}]}, {"task": {"type": "summarization", "name": "Summarization"}, "dataset": {"name": "scientific_papers", "type": "scientific_papers", "config": "arxiv", "split": "test"}, "metrics": [{"type": "rouge", "value": 43.4702, "name": "ROUGE-1", "verified": true}, {"type": "rouge", "value": 17.4297, "name": "ROUGE-2", "verified": true}, {"type": "rouge", "value": 26.2587, "name": "ROUGE-L", "verified": true}, {"type": "rouge", "value": 35.5587, "name": "ROUGE-LSUM", "verified": true}, {"type": "loss", "value": 2.1113228797912598, "name": "loss", "verified": true}, {"type": "gen_len", "value": 183.3702, "name": "gen_len", "verified": true}]}, {"task": {"type": "summarization", "name": "Summarization"}, "dataset": {"name": "samsum", "type": "samsum", "config": "samsum", "split": "test"}, "metrics": [{"type": "rouge", "value": 3.621, "name": "ROUGE-1", "verified": true}, {"type": "rouge", "value": 0.1699, "name": "ROUGE-2", "verified": true}, {"type": "rouge", "value": 3.2016, "name": "ROUGE-L", "verified": true}, {"type": "rouge", "value": 3.3269, "name": "ROUGE-LSUM", "verified": true}, {"type": "loss", "value": 7.664482116699219, "name": "loss", "verified": true}, {"type": "gen_len", "value": 233.8107, "name": "gen_len", "verified": true}]}]}]}
google/bigbird-pegasus-large-arxiv
null
[ "transformers", "pytorch", "bigbird_pegasus", "text2text-generation", "summarization", "en", "dataset:scientific_papers", "arxiv:2007.14062", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2007.14062" ]
[ "en" ]
TAGS #transformers #pytorch #bigbird_pegasus #text2text-generation #summarization #en #dataset-scientific_papers #arxiv-2007.14062 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us
# BigBirdPegasus model (large) BigBird, is a sparse-attention based transformer which extends Transformer based models, such as BERT to much longer sequences. Moreover, BigBird comes along with a theoretical understanding of the capabilities of a complete transformer that the sparse model can handle. BigBird was introduced in this paper and first released in this repository. Disclaimer: The team releasing BigBird did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description BigBird relies on block sparse attention instead of normal attention (i.e. BERT's attention) and can handle sequences up to a length of 4096 at a much lower compute cost compared to BERT. It has achieved SOTA on various tasks involving very long sequences such as long documents summarization, question-answering with long contexts. ## How to use Here is how to use this model to get the features of a given text in PyTorch: ## Training Procedure This checkpoint is obtained after fine-tuning 'BigBirdPegasusForConditionalGeneration' for summarization on arxiv dataset from scientific_papers. ## BibTeX entry and citation info
[ "# BigBirdPegasus model (large)\n\nBigBird, is a sparse-attention based transformer which extends Transformer based models, such as BERT to much longer sequences. Moreover, BigBird comes along with a theoretical understanding of the capabilities of a complete transformer that the sparse model can handle. \n\nBigBird was introduced in this paper and first released in this repository.\n\nDisclaimer: The team releasing BigBird did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nBigBird relies on block sparse attention instead of normal attention (i.e. BERT's attention) and can handle sequences up to a length of 4096 at a much lower compute cost compared to BERT. It has achieved SOTA on various tasks involving very long sequences such as long documents summarization, question-answering with long contexts.", "## How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:", "## Training Procedure\n\nThis checkpoint is obtained after fine-tuning 'BigBirdPegasusForConditionalGeneration' for summarization on arxiv dataset from scientific_papers.", "## BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #bigbird_pegasus #text2text-generation #summarization #en #dataset-scientific_papers #arxiv-2007.14062 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# BigBirdPegasus model (large)\n\nBigBird, is a sparse-attention based transformer which extends Transformer based models, such as BERT to much longer sequences. Moreover, BigBird comes along with a theoretical understanding of the capabilities of a complete transformer that the sparse model can handle. \n\nBigBird was introduced in this paper and first released in this repository.\n\nDisclaimer: The team releasing BigBird did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nBigBird relies on block sparse attention instead of normal attention (i.e. BERT's attention) and can handle sequences up to a length of 4096 at a much lower compute cost compared to BERT. It has achieved SOTA on various tasks involving very long sequences such as long documents summarization, question-answering with long contexts.", "## How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:", "## Training Procedure\n\nThis checkpoint is obtained after fine-tuning 'BigBirdPegasusForConditionalGeneration' for summarization on arxiv dataset from scientific_papers.", "## BibTeX entry and citation info" ]
summarization
transformers
# BigBirdPegasus model (large)

BigBird is a sparse-attention-based transformer which extends Transformer-based models, such as BERT, to much longer sequences. Moreover, BigBird comes with a theoretical understanding of the capabilities of a complete transformer that the sparse model can handle.

BigBird was introduced in this [paper](https://arxiv.org/abs/2007.14062) and first released in this [repository](https://github.com/google-research/bigbird).

Disclaimer: The team releasing BigBird did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

BigBird relies on **block sparse attention** instead of normal attention (i.e. BERT's attention) and can handle sequences up to a length of 4096 at a much lower compute cost compared to BERT. It has achieved SOTA on various tasks involving very long sequences, such as long-document summarization and question answering with long contexts.

## How to use

Here is how to use this model to summarize a given text in PyTorch:

```python
from transformers import BigBirdPegasusForConditionalGeneration, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/bigbird-pegasus-large-bigpatent")

# by default encoder-attention is `block_sparse` with num_random_blocks=3, block_size=64
model = BigBirdPegasusForConditionalGeneration.from_pretrained("google/bigbird-pegasus-large-bigpatent")

# decoder attention type can't be changed & will be "original_full"

# you can change `attention_type` (encoder only) to full attention like this:
model = BigBirdPegasusForConditionalGeneration.from_pretrained("google/bigbird-pegasus-large-bigpatent", attention_type="original_full")

# you can change `block_size` & `num_random_blocks` like this:
model = BigBirdPegasusForConditionalGeneration.from_pretrained("google/bigbird-pegasus-large-bigpatent", block_size=16, num_random_blocks=2)

text = "Replace me by any text you'd like."
inputs = tokenizer(text, return_tensors='pt')
prediction = model.generate(**inputs)
prediction = tokenizer.batch_decode(prediction)
```

## Training Procedure

This checkpoint is obtained after fine-tuning `BigBirdPegasusForConditionalGeneration` for **summarization** on the [big_patent](https://huggingface.co/datasets/big_patent) dataset.

## BibTeX entry and citation info

```tex
@misc{zaheer2021big,
      title={Big Bird: Transformers for Longer Sequences},
      author={Manzil Zaheer and Guru Guruganesh and Avinava Dubey and Joshua Ainslie and Chris Alberti and Santiago Ontanon and Philip Pham and Anirudh Ravula and Qifan Wang and Li Yang and Amr Ahmed},
      year={2021},
      eprint={2007.14062},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```
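For quick experiments, the same checkpoint can also be driven through the high-level `pipeline` API. The sketch below is illustrative only; the input text is a placeholder and the generation limits passed to the pipeline are assumptions, not settings documented in the card.

```python
from transformers import pipeline

# the pipeline wraps tokenization, generation, and decoding in one call
summarizer = pipeline("summarization", model="google/bigbird-pegasus-large-bigpatent")

# placeholder input; generation limits here are illustrative
patent_text = "Replace me by the full text of a patent description."
summary = summarizer(patent_text, truncation=True, max_length=256, min_length=32)
print(summary[0]["summary_text"])
```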
{"language": "en", "license": "apache-2.0", "tags": ["summarization"], "datasets": ["big_patent"]}
google/bigbird-pegasus-large-bigpatent
null
[ "transformers", "pytorch", "bigbird_pegasus", "text2text-generation", "summarization", "en", "dataset:big_patent", "arxiv:2007.14062", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2007.14062" ]
[ "en" ]
TAGS #transformers #pytorch #bigbird_pegasus #text2text-generation #summarization #en #dataset-big_patent #arxiv-2007.14062 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
# BigBirdPegasus model (large) BigBird, is a sparse-attention based transformer which extends Transformer based models, such as BERT to much longer sequences. Moreover, BigBird comes along with a theoretical understanding of the capabilities of a complete transformer that the sparse model can handle. BigBird was introduced in this paper and first released in this repository. Disclaimer: The team releasing BigBird did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description BigBird relies on block sparse attention instead of normal attention (i.e. BERT's attention) and can handle sequences up to a length of 4096 at a much lower compute cost compared to BERT. It has achieved SOTA on various tasks involving very long sequences such as long documents summarization, question-answering with long contexts. ## How to use Here is how to use this model to get the features of a given text in PyTorch: ## Training Procedure This checkpoint is obtained after fine-tuning 'BigBirdPegasusForConditionalGeneration' for summarization on big_patent dataset. ## BibTeX entry and citation info
[ "# BigBirdPegasus model (large)\n\nBigBird, is a sparse-attention based transformer which extends Transformer based models, such as BERT to much longer sequences. Moreover, BigBird comes along with a theoretical understanding of the capabilities of a complete transformer that the sparse model can handle. \n\nBigBird was introduced in this paper and first released in this repository.\n\nDisclaimer: The team releasing BigBird did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nBigBird relies on block sparse attention instead of normal attention (i.e. BERT's attention) and can handle sequences up to a length of 4096 at a much lower compute cost compared to BERT. It has achieved SOTA on various tasks involving very long sequences such as long documents summarization, question-answering with long contexts.", "## How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:", "## Training Procedure\n\nThis checkpoint is obtained after fine-tuning 'BigBirdPegasusForConditionalGeneration' for summarization on big_patent dataset.", "## BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #bigbird_pegasus #text2text-generation #summarization #en #dataset-big_patent #arxiv-2007.14062 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# BigBirdPegasus model (large)\n\nBigBird, is a sparse-attention based transformer which extends Transformer based models, such as BERT to much longer sequences. Moreover, BigBird comes along with a theoretical understanding of the capabilities of a complete transformer that the sparse model can handle. \n\nBigBird was introduced in this paper and first released in this repository.\n\nDisclaimer: The team releasing BigBird did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nBigBird relies on block sparse attention instead of normal attention (i.e. BERT's attention) and can handle sequences up to a length of 4096 at a much lower compute cost compared to BERT. It has achieved SOTA on various tasks involving very long sequences such as long documents summarization, question-answering with long contexts.", "## How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:", "## Training Procedure\n\nThis checkpoint is obtained after fine-tuning 'BigBirdPegasusForConditionalGeneration' for summarization on big_patent dataset.", "## BibTeX entry and citation info" ]
summarization
transformers
# BigBirdPegasus model (large)

BigBird is a sparse-attention-based transformer which extends Transformer-based models, such as BERT, to much longer sequences. Moreover, BigBird comes with a theoretical understanding of the capabilities of a complete transformer that the sparse model can handle.

BigBird was introduced in this [paper](https://arxiv.org/abs/2007.14062) and first released in this [repository](https://github.com/google-research/bigbird).

Disclaimer: The team releasing BigBird did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

BigBird relies on **block sparse attention** instead of normal attention (i.e. BERT's attention) and can handle sequences up to a length of 4096 at a much lower compute cost compared to BERT. It has achieved SOTA on various tasks involving very long sequences, such as long-document summarization and question answering with long contexts.

## How to use

Here is how to use this model to summarize a given text in PyTorch:

```python
from transformers import BigBirdPegasusForConditionalGeneration, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/bigbird-pegasus-large-pubmed")

# by default encoder-attention is `block_sparse` with num_random_blocks=3, block_size=64
model = BigBirdPegasusForConditionalGeneration.from_pretrained("google/bigbird-pegasus-large-pubmed")

# decoder attention type can't be changed & will be "original_full"

# you can change `attention_type` (encoder only) to full attention like this:
model = BigBirdPegasusForConditionalGeneration.from_pretrained("google/bigbird-pegasus-large-pubmed", attention_type="original_full")

# you can change `block_size` & `num_random_blocks` like this:
model = BigBirdPegasusForConditionalGeneration.from_pretrained("google/bigbird-pegasus-large-pubmed", block_size=16, num_random_blocks=2)

text = "Replace me by any text you'd like."
inputs = tokenizer(text, return_tensors='pt')
prediction = model.generate(**inputs)
prediction = tokenizer.batch_decode(prediction)
```

## Training Procedure

This checkpoint is obtained after fine-tuning `BigBirdPegasusForConditionalGeneration` for **summarization** on the **pubmed** subset of [scientific_papers](https://huggingface.co/datasets/scientific_papers).

## BibTeX entry and citation info

```tex
@misc{zaheer2021big,
      title={Big Bird: Transformers for Longer Sequences},
      author={Manzil Zaheer and Guru Guruganesh and Avinava Dubey and Joshua Ainslie and Chris Alberti and Santiago Ontanon and Philip Pham and Anirudh Ravula and Qifan Wang and Li Yang and Amr Ahmed},
      year={2021},
      eprint={2007.14062},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```
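Since the encoder accepts at most 4096 tokens, batched inputs need padding and truncation. The sketch below shows one plausible way to summarize a small batch with this checkpoint; the placeholder articles and the generation settings are assumptions, not values taken from the card.

```python
import torch
from transformers import BigBirdPegasusForConditionalGeneration, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/bigbird-pegasus-large-pubmed")
model = BigBirdPegasusForConditionalGeneration.from_pretrained("google/bigbird-pegasus-large-pubmed")

# placeholder inputs; padding + truncation keep the batch within the 4096-token encoder window
articles = [
    "Replace me by the full text of a biomedical article.",
    "Replace me by the full text of another biomedical article.",
]
inputs = tokenizer(articles, padding="longest", truncation=True, max_length=4096, return_tensors="pt")

with torch.no_grad():
    summary_ids = model.generate(**inputs, num_beams=4, max_length=256)

for summary in tokenizer.batch_decode(summary_ids, skip_special_tokens=True):
    print(summary)
```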
{"language": "en", "license": "apache-2.0", "tags": ["summarization"], "datasets": ["scientific_papers"], "model-index": [{"name": "google/bigbird-pegasus-large-pubmed", "results": [{"task": {"type": "summarization", "name": "Summarization"}, "dataset": {"name": "scientific_papers", "type": "scientific_papers", "config": "pubmed", "split": "test"}, "metrics": [{"type": "rouge", "value": 40.8966, "name": "ROUGE-1", "verified": true}, {"type": "rouge", "value": 18.1161, "name": "ROUGE-2", "verified": true}, {"type": "rouge", "value": 26.1743, "name": "ROUGE-L", "verified": true}, {"type": "rouge", "value": 34.2773, "name": "ROUGE-LSUM", "verified": true}, {"type": "loss", "value": 2.1707184314727783, "name": "loss", "verified": true}, {"type": "meteor", "value": 0.3513, "name": "meteor", "verified": true}, {"type": "gen_len", "value": 221.2531, "name": "gen_len", "verified": true}]}, {"task": {"type": "summarization", "name": "Summarization"}, "dataset": {"name": "scientific_papers", "type": "scientific_papers", "config": "arxiv", "split": "test"}, "metrics": [{"type": "rouge", "value": 40.3815, "name": "ROUGE-1", "verified": true}, {"type": "rouge", "value": 14.374, "name": "ROUGE-2", "verified": true}, {"type": "rouge", "value": 23.4773, "name": "ROUGE-L", "verified": true}, {"type": "rouge", "value": 33.772, "name": "ROUGE-LSUM", "verified": true}, {"type": "loss", "value": 3.235051393508911, "name": "loss", "verified": true}, {"type": "gen_len", "value": 186.2003, "name": "gen_len", "verified": true}]}]}]}
google/bigbird-pegasus-large-pubmed
null
[ "transformers", "pytorch", "bigbird_pegasus", "text2text-generation", "summarization", "en", "dataset:scientific_papers", "arxiv:2007.14062", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2007.14062" ]
[ "en" ]
TAGS #transformers #pytorch #bigbird_pegasus #text2text-generation #summarization #en #dataset-scientific_papers #arxiv-2007.14062 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us
# BigBirdPegasus model (large) BigBird, is a sparse-attention based transformer which extends Transformer based models, such as BERT to much longer sequences. Moreover, BigBird comes along with a theoretical understanding of the capabilities of a complete transformer that the sparse model can handle. BigBird was introduced in this paper and first released in this repository. Disclaimer: The team releasing BigBird did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description BigBird relies on block sparse attention instead of normal attention (i.e. BERT's attention) and can handle sequences up to a length of 4096 at a much lower compute cost compared to BERT. It has achieved SOTA on various tasks involving very long sequences such as long documents summarization, question-answering with long contexts. ## How to use Here is how to use this model to get the features of a given text in PyTorch: ## Training Procedure This checkpoint is obtained after fine-tuning 'BigBirdPegasusForConditionalGeneration' for summarization on pubmed dataset from scientific_papers. ## BibTeX entry and citation info
[ "# BigBirdPegasus model (large)\n\nBigBird, is a sparse-attention based transformer which extends Transformer based models, such as BERT to much longer sequences. Moreover, BigBird comes along with a theoretical understanding of the capabilities of a complete transformer that the sparse model can handle. \n\nBigBird was introduced in this paper and first released in this repository.\n\nDisclaimer: The team releasing BigBird did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nBigBird relies on block sparse attention instead of normal attention (i.e. BERT's attention) and can handle sequences up to a length of 4096 at a much lower compute cost compared to BERT. It has achieved SOTA on various tasks involving very long sequences such as long documents summarization, question-answering with long contexts.", "## How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:", "## Training Procedure\n\nThis checkpoint is obtained after fine-tuning 'BigBirdPegasusForConditionalGeneration' for summarization on pubmed dataset from scientific_papers.", "## BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #bigbird_pegasus #text2text-generation #summarization #en #dataset-scientific_papers #arxiv-2007.14062 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# BigBirdPegasus model (large)\n\nBigBird, is a sparse-attention based transformer which extends Transformer based models, such as BERT to much longer sequences. Moreover, BigBird comes along with a theoretical understanding of the capabilities of a complete transformer that the sparse model can handle. \n\nBigBird was introduced in this paper and first released in this repository.\n\nDisclaimer: The team releasing BigBird did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nBigBird relies on block sparse attention instead of normal attention (i.e. BERT's attention) and can handle sequences up to a length of 4096 at a much lower compute cost compared to BERT. It has achieved SOTA on various tasks involving very long sequences such as long documents summarization, question-answering with long contexts.", "## How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:", "## Training Procedure\n\nThis checkpoint is obtained after fine-tuning 'BigBirdPegasusForConditionalGeneration' for summarization on pubmed dataset from scientific_papers.", "## BibTeX entry and citation info" ]
null
transformers
# BigBird base model

BigBird is a sparse-attention-based transformer which extends Transformer-based models, such as BERT, to much longer sequences. Moreover, BigBird comes with a theoretical understanding of the capabilities of a complete transformer that the sparse model can handle.

It is a model pretrained on English text with a masked language modeling (MLM) objective. It was introduced in this [paper](https://arxiv.org/abs/2007.14062) and first released in this [repository](https://github.com/google-research/bigbird).

Disclaimer: The team releasing BigBird did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

BigBird relies on **block sparse attention** instead of normal attention (i.e. BERT's attention) and can handle sequences up to a length of 4096 at a much lower compute cost compared to BERT. It has achieved SOTA on various tasks involving very long sequences, such as long-document summarization and question answering with long contexts.

## How to use

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import BigBirdModel, AutoTokenizer

# by default it's in `block_sparse` mode with num_random_blocks=3, block_size=64
model = BigBirdModel.from_pretrained("google/bigbird-roberta-base")
tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base")

# you can change `attention_type` to full attention like this:
model = BigBirdModel.from_pretrained("google/bigbird-roberta-base", attention_type="original_full")

# you can change `block_size` & `num_random_blocks` like this:
model = BigBirdModel.from_pretrained("google/bigbird-roberta-base", block_size=16, num_random_blocks=2)

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

## Training Data

This model is pre-trained on four publicly available datasets: **Books**, **CC-News**, **Stories** and **Wikipedia**. It used the same sentencepiece vocabulary as RoBERTa (which is in turn borrowed from GPT2).

## Training Procedure

Documents longer than 4096 tokens were split into multiple documents, and documents much shorter than 4096 were joined. Following the original BERT training, 15% of tokens were masked and the model was trained to predict the masked tokens.

The model is warm-started from RoBERTa's checkpoint.

## BibTeX entry and citation info

```tex
@misc{zaheer2021big,
      title={Big Bird: Transformers for Longer Sequences},
      author={Manzil Zaheer and Guru Guruganesh and Avinava Dubey and Joshua Ainslie and Chris Alberti and Santiago Ontanon and Philip Pham and Anirudh Ravula and Qifan Wang and Li Yang and Amr Ahmed},
      year={2021},
      eprint={2007.14062},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```
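Because the checkpoint was pre-trained with a masked language modeling objective, it can also be loaded with an MLM head to fill in masked tokens. The sketch below is a hedged illustration of that use; the example sentence and the top-1 decoding are illustrative choices, not part of the original card.

```python
import torch
from transformers import BigBirdForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base")
model = BigBirdForMaskedLM.from_pretrained("google/bigbird-roberta-base")

# mask one token and let the pretrained MLM head predict it
text = f"The capital of France is {tokenizer.mask_token}."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# locate the masked position and decode the highest-scoring prediction for it
mask_positions = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_ids = logits[0, mask_positions].argmax(dim=-1)
print(tokenizer.decode(predicted_ids))
```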
{"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia", "cc_news"]}
google/bigbird-roberta-base
null
[ "transformers", "pytorch", "jax", "big_bird", "pretraining", "en", "dataset:bookcorpus", "dataset:wikipedia", "dataset:cc_news", "arxiv:2007.14062", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2007.14062" ]
[ "en" ]
TAGS #transformers #pytorch #jax #big_bird #pretraining #en #dataset-bookcorpus #dataset-wikipedia #dataset-cc_news #arxiv-2007.14062 #license-apache-2.0 #endpoints_compatible #has_space #region-us
# BigBird base model BigBird, is a sparse-attention based transformer which extends Transformer based models, such as BERT to much longer sequences. Moreover, BigBird comes along with a theoretical understanding of the capabilities of a complete transformer that the sparse model can handle. It is a pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. Disclaimer: The team releasing BigBird did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description BigBird relies on block sparse attention instead of normal attention (i.e. BERT's attention) and can handle sequences up to a length of 4096 at a much lower compute cost compared to BERT. It has achieved SOTA on various tasks involving very long sequences such as long documents summarization, question-answering with long contexts. ## How to use Here is how to use this model to get the features of a given text in PyTorch: ## Training Data This model is pre-trained on four publicly available datasets: Books, CC-News, Stories and Wikipedia. It used same sentencepiece vocabulary as RoBERTa (which is in turn borrowed from GPT2). ## Training Procedure Document longer than 4096 were split into multiple documents and documents that were much smaller than 4096 were joined. Following the original BERT training, 15% of tokens were masked and model is trained to predict the mask. Model is warm started from RoBERTa’s checkpoint. ## BibTeX entry and citation info
[ "# BigBird base model\n\nBigBird, is a sparse-attention based transformer which extends Transformer based models, such as BERT to much longer sequences. Moreover, BigBird comes along with a theoretical understanding of the capabilities of a complete transformer that the sparse model can handle.\n\nIt is a pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository.\n\nDisclaimer: The team releasing BigBird did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nBigBird relies on block sparse attention instead of normal attention (i.e. BERT's attention) and can handle sequences up to a length of 4096 at a much lower compute cost compared to BERT. It has achieved SOTA on various tasks involving very long sequences such as long documents summarization, question-answering with long contexts.", "## How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:", "## Training Data\n\nThis model is pre-trained on four publicly available datasets: Books, CC-News, Stories and Wikipedia. It used same sentencepiece vocabulary as RoBERTa (which is in turn borrowed from GPT2).", "## Training Procedure\n\nDocument longer than 4096 were split into multiple documents and documents that were much smaller than 4096 were joined. Following the original BERT training, 15% of tokens were masked and model is trained to predict the mask.\n\nModel is warm started from RoBERTa’s checkpoint.", "## BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #jax #big_bird #pretraining #en #dataset-bookcorpus #dataset-wikipedia #dataset-cc_news #arxiv-2007.14062 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n", "# BigBird base model\n\nBigBird, is a sparse-attention based transformer which extends Transformer based models, such as BERT to much longer sequences. Moreover, BigBird comes along with a theoretical understanding of the capabilities of a complete transformer that the sparse model can handle.\n\nIt is a pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository.\n\nDisclaimer: The team releasing BigBird did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nBigBird relies on block sparse attention instead of normal attention (i.e. BERT's attention) and can handle sequences up to a length of 4096 at a much lower compute cost compared to BERT. It has achieved SOTA on various tasks involving very long sequences such as long documents summarization, question-answering with long contexts.", "## How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:", "## Training Data\n\nThis model is pre-trained on four publicly available datasets: Books, CC-News, Stories and Wikipedia. It used same sentencepiece vocabulary as RoBERTa (which is in turn borrowed from GPT2).", "## Training Procedure\n\nDocument longer than 4096 were split into multiple documents and documents that were much smaller than 4096 were joined. Following the original BERT training, 15% of tokens were masked and model is trained to predict the mask.\n\nModel is warm started from RoBERTa’s checkpoint.", "## BibTeX entry and citation info" ]
fill-mask
transformers
# BigBird large model

BigBird is a sparse-attention-based transformer which extends Transformer-based models, such as BERT, to much longer sequences. Moreover, BigBird comes with a theoretical understanding of the capabilities of a complete transformer that the sparse model can handle.

It is a model pretrained on English text with a masked language modeling (MLM) objective. It was introduced in this [paper](https://arxiv.org/abs/2007.14062) and first released in this [repository](https://github.com/google-research/bigbird).

Disclaimer: The team releasing BigBird did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

BigBird relies on **block sparse attention** instead of normal attention (i.e. BERT's attention) and can handle sequences up to a length of 4096 at a much lower compute cost compared to BERT. It has achieved SOTA on various tasks involving very long sequences, such as long-document summarization and question answering with long contexts.

## How to use

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import BigBirdModel, AutoTokenizer

# by default it's in `block_sparse` mode with num_random_blocks=3, block_size=64
model = BigBirdModel.from_pretrained("google/bigbird-roberta-large")
tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-large")

# you can change `attention_type` to full attention like this:
model = BigBirdModel.from_pretrained("google/bigbird-roberta-large", attention_type="original_full")

# you can change `block_size` & `num_random_blocks` like this:
model = BigBirdModel.from_pretrained("google/bigbird-roberta-large", block_size=16, num_random_blocks=2)

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

## Training Data

This model is pre-trained on four publicly available datasets: **Books**, **CC-News**, **Stories** and **Wikipedia**. It used the same sentencepiece vocabulary as RoBERTa (which is in turn borrowed from GPT2).

## Training Procedure

Documents longer than 4096 tokens were split into multiple documents, and documents much shorter than 4096 were joined. Following the original BERT training, 15% of tokens were masked and the model was trained to predict the masked tokens.

The model is warm-started from RoBERTa's checkpoint.

## BibTeX entry and citation info

```tex
@misc{zaheer2021big,
      title={Big Bird: Transformers for Longer Sequences},
      author={Manzil Zaheer and Guru Guruganesh and Avinava Dubey and Joshua Ainslie and Chris Alberti and Santiago Ontanon and Philip Pham and Anirudh Ravula and Qifan Wang and Li Yang and Amr Ahmed},
      year={2021},
      eprint={2007.14062},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```
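As a hedged convenience example that is not taken from the card, the `fill-mask` pipeline wraps the same masked-token prediction workflow in a single call; the example sentence and the `top_k` value are illustrative.

```python
from transformers import pipeline

# the fill-mask pipeline handles tokenization, the forward pass, and top-k decoding
unmasker = pipeline("fill-mask", model="google/bigbird-roberta-large")

for prediction in unmasker("The capital of France is [MASK].", top_k=3):
    print(prediction["token_str"], round(prediction["score"], 3))
```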
{"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia", "cc_news"]}
google/bigbird-roberta-large
null
[ "transformers", "pytorch", "jax", "big_bird", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "dataset:cc_news", "arxiv:2007.14062", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2007.14062" ]
[ "en" ]
TAGS #transformers #pytorch #jax #big_bird #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #dataset-cc_news #arxiv-2007.14062 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# BigBird large model BigBird, is a sparse-attention based transformer which extends Transformer based models, such as BERT to much longer sequences. Moreover, BigBird comes along with a theoretical understanding of the capabilities of a complete transformer that the sparse model can handle. It is a pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. Disclaimer: The team releasing BigBird did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description BigBird relies on block sparse attention instead of normal attention (i.e. BERT's attention) and can handle sequences up to a length of 4096 at a much lower compute cost compared to BERT. It has achieved SOTA on various tasks involving very long sequences such as long documents summarization, question-answering with long contexts. ## How to use Here is how to use this model to get the features of a given text in PyTorch: ## Training Data This model is pre-trained on four publicly available datasets: Books, CC-News, Stories and Wikipedia. It used same sentencepiece vocabulary as RoBERTa (which is in turn borrowed from GPT2). ## Training Procedure Document longer than 4096 were split into multiple documents and documents that were much smaller than 4096 were joined. Following the original BERT training, 15% of tokens were masked and model is trained to predict the mask. Model is warm started from RoBERTa’s checkpoint. ## BibTeX entry and citation info
[ "# BigBird large model\n\nBigBird, is a sparse-attention based transformer which extends Transformer based models, such as BERT to much longer sequences. Moreover, BigBird comes along with a theoretical understanding of the capabilities of a complete transformer that the sparse model can handle.\n\nIt is a pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository.\n\nDisclaimer: The team releasing BigBird did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nBigBird relies on block sparse attention instead of normal attention (i.e. BERT's attention) and can handle sequences up to a length of 4096 at a much lower compute cost compared to BERT. It has achieved SOTA on various tasks involving very long sequences such as long documents summarization, question-answering with long contexts.", "## How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:", "## Training Data\n\nThis model is pre-trained on four publicly available datasets: Books, CC-News, Stories and Wikipedia. It used same sentencepiece vocabulary as RoBERTa (which is in turn borrowed from GPT2).", "## Training Procedure\n\nDocument longer than 4096 were split into multiple documents and documents that were much smaller than 4096 were joined. Following the original BERT training, 15% of tokens were masked and model is trained to predict the mask.\n\nModel is warm started from RoBERTa’s checkpoint.", "## BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #jax #big_bird #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #dataset-cc_news #arxiv-2007.14062 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# BigBird large model\n\nBigBird, is a sparse-attention based transformer which extends Transformer based models, such as BERT to much longer sequences. Moreover, BigBird comes along with a theoretical understanding of the capabilities of a complete transformer that the sparse model can handle.\n\nIt is a pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository.\n\nDisclaimer: The team releasing BigBird did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nBigBird relies on block sparse attention instead of normal attention (i.e. BERT's attention) and can handle sequences up to a length of 4096 at a much lower compute cost compared to BERT. It has achieved SOTA on various tasks involving very long sequences such as long documents summarization, question-answering with long contexts.", "## How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:", "## Training Data\n\nThis model is pre-trained on four publicly available datasets: Books, CC-News, Stories and Wikipedia. It used same sentencepiece vocabulary as RoBERTa (which is in turn borrowed from GPT2).", "## Training Procedure\n\nDocument longer than 4096 were split into multiple documents and documents that were much smaller than 4096 were joined. Following the original BERT training, 15% of tokens were masked and model is trained to predict the mask.\n\nModel is warm started from RoBERTa’s checkpoint.", "## BibTeX entry and citation info" ]
text2text-generation
transformers
# ByT5 - Base

ByT5 is a tokenizer-free version of [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and generally follows the architecture of [MT5](https://huggingface.co/google/mt5-base).

ByT5 was pre-trained only on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual), without any supervised training, using an average span mask of 20 UTF-8 characters. Therefore, this model has to be fine-tuned before it is usable on a downstream task.

ByT5 works especially well on noisy text data, *e.g.*, `google/byt5-base` significantly outperforms [mt5-base](https://huggingface.co/google/mt5-base) on [TweetQA](https://arxiv.org/abs/1907.06292).

Paper: [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626)

Authors: *Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel*

## Example Inference

ByT5 works on raw UTF-8 bytes and can be used without a tokenizer:

```python
from transformers import T5ForConditionalGeneration
import torch

model = T5ForConditionalGeneration.from_pretrained('google/byt5-base')

input_ids = torch.tensor([list("Life is like a box of chocolates.".encode("utf-8"))]) + 3  # add 3 for special tokens
labels = torch.tensor([list("La vie est comme une boîte de chocolat.".encode("utf-8"))]) + 3  # add 3 for special tokens

loss = model(input_ids, labels=labels).loss  # forward pass
```

For batched inference & training, however, it is recommended to use a tokenizer class for padding:

```python
from transformers import T5ForConditionalGeneration, AutoTokenizer

model = T5ForConditionalGeneration.from_pretrained('google/byt5-base')
tokenizer = AutoTokenizer.from_pretrained('google/byt5-base')

model_inputs = tokenizer(["Life is like a box of chocolates.", "Today is Monday."], padding="longest", return_tensors="pt")
labels = tokenizer(["La vie est comme une boîte de chocolat.", "Aujourd'hui c'est lundi."], padding="longest", return_tensors="pt").input_ids

loss = model(**model_inputs, labels=labels).loss  # forward pass
```

## Abstract

Most widely-used pre-trained language models operate on sequences of tokens corresponding to word or subword units. Encoding text as a sequence of tokens requires a tokenizer, which is typically created as an independent artifact from the model. Token-free models that instead operate directly on raw text (bytes or characters) have many benefits: they can process text in any language out of the box, they are more robust to noise, and they minimize technical debt by removing complex and error-prone text preprocessing pipelines. Since byte or character sequences are longer than token sequences, past work on token-free models has often introduced new model architectures designed to amortize the cost of operating directly on raw text. In this paper, we show that a standard Transformer architecture can be used with minimal modifications to process byte sequences. We carefully characterize the trade-offs in terms of parameter count, training FLOPs, and inference speed, and show that byte-level models are competitive with their token-level counterparts. We also demonstrate that byte-level models are significantly more robust to noise and perform better on tasks that are sensitive to spelling and pronunciation. As part of our contribution, we release a new set of pre-trained byte-level Transformer models based on the T5 architecture, as well as all code and data used in our experiments.
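To make the byte/ID convention above concrete, the sketch below maps generated IDs back to text by keeping only plain byte IDs, undoing the +3 offset, and decoding the bytes as UTF-8. It is an illustrative sketch only; because this checkpoint is not fine-tuned, the generated bytes are not expected to be meaningful.

```python
import torch
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("google/byt5-base")

# encode a string as UTF-8 bytes shifted by 3, mirroring the special-token offset used above
text = "Life is like a box of chocolates."
input_ids = torch.tensor([list(text.encode("utf-8"))]) + 3

generated = model.generate(input_ids, max_length=64)

# invert the offset: keep only plain byte ids (3..258), subtract 3, and decode as UTF-8
byte_ids = [i - 3 for i in generated[0].tolist() if 2 < i < 259]
print(bytes(byte_ids).decode("utf-8", errors="ignore"))
```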
![model image](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/ByT5.png)
{"language": ["multilingual", "af", "am", "ar", "az", "be", "bg", "bn", "ca", "ceb", "co", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fil", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "haw", "hi", "hmn", "ht", "hu", "hy", "ig", "is", "it", "iw", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lb", "lo", "lt", "lv", "mg", "mi", "mk", "ml", "mn", "mr", "ms", "mt", "my", "ne", "nl", false, "ny", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "sm", "sn", "so", "sq", "sr", "st", "su", "sv", "sw", "ta", "te", "tg", "th", "tr", "uk", "und", "ur", "uz", "vi", "xh", "yi", "yo", "zh", "zu"], "license": "apache-2.0", "datasets": ["mc4"]}
google/byt5-base
null
[ "transformers", "pytorch", "tf", "jax", "t5", "text2text-generation", "multilingual", "af", "am", "ar", "az", "be", "bg", "bn", "ca", "ceb", "co", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fil", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "haw", "hi", "hmn", "ht", "hu", "hy", "ig", "is", "it", "iw", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lb", "lo", "lt", "lv", "mg", "mi", "mk", "ml", "mn", "mr", "ms", "mt", "my", "ne", "nl", "no", "ny", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "sm", "sn", "so", "sq", "sr", "st", "su", "sv", "sw", "ta", "te", "tg", "th", "tr", "uk", "und", "ur", "uz", "vi", "xh", "yi", "yo", "zh", "zu", "dataset:mc4", "arxiv:1907.06292", "arxiv:2105.13626", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1907.06292", "2105.13626" ]
[ "multilingual", "af", "am", "ar", "az", "be", "bg", "bn", "ca", "ceb", "co", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fil", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "haw", "hi", "hmn", "ht", "hu", "hy", "ig", "is", "it", "iw", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lb", "lo", "lt", "lv", "mg", "mi", "mk", "ml", "mn", "mr", "ms", "mt", "my", "ne", "nl", "no", "ny", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "sm", "sn", "so", "sq", "sr", "st", "su", "sv", "sw", "ta", "te", "tg", "th", "tr", "uk", "und", "ur", "uz", "vi", "xh", "yi", "yo", "zh", "zu" ]
TAGS #transformers #pytorch #tf #jax #t5 #text2text-generation #multilingual #af #am #ar #az #be #bg #bn #ca #ceb #co #cs #cy #da #de #el #en #eo #es #et #eu #fa #fi #fil #fr #fy #ga #gd #gl #gu #ha #haw #hi #hmn #ht #hu #hy #ig #is #it #iw #ja #jv #ka #kk #km #kn #ko #ku #ky #la #lb #lo #lt #lv #mg #mi #mk #ml #mn #mr #ms #mt #my #ne #nl #no #ny #pa #pl #ps #pt #ro #ru #sd #si #sk #sl #sm #sn #so #sq #sr #st #su #sv #sw #ta #te #tg #th #tr #uk #und #ur #uz #vi #xh #yi #yo #zh #zu #dataset-mc4 #arxiv-1907.06292 #arxiv-2105.13626 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
# ByT5 - Base ByT5 is a tokenizer-free version of Google's T5 and generally follows the architecture of MT5. ByT5 was only pre-trained on mC4 excluding any supervised training with an average span-mask of 20 UTF-8 characters. Therefore, this model has to be fine-tuned before it is useable on a downstream task. ByT5 works especially well on noisy text data,*e.g.*, 'google/byt5-base' significantly outperforms mt5-base on TweetQA. Paper: ByT5: Towards a token-free future with pre-trained byte-to-byte models Authors: *Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel* ## Example Inference ByT5 works on raw UTF-8 bytes and can be used without a tokenizer: For batched inference & training it is however recommended using a tokenizer class for padding: ## Abstract Most widely-used pre-trained language models operate on sequences of tokens corresponding to word or subword units. Encoding text as a sequence of tokens requires a tokenizer, which is typically created as an independent artifact from the model. Token-free models that instead operate directly on raw text (bytes or characters) have many benefits: they can process text in any language out of the box, they are more robust to noise, and they minimize technical debt by removing complex and error-prone text preprocessing pipelines. Since byte or character sequences are longer than token sequences, past work on token-free models has often introduced new model architectures designed to amortize the cost of operating directly on raw text. In this paper, we show that a standard Transformer architecture can be used with minimal modifications to process byte sequences. We carefully characterize the trade-offs in terms of parameter count, training FLOPs, and inference speed, and show that byte-level models are competitive with their token-level counterparts. We also demonstrate that byte-level models are significantly more robust to noise and perform better on tasks that are sensitive to spelling and pronunciation. As part of our contribution, we release a new set of pre-trained byte-level Transformer models based on the T5 architecture, as well as all code and data used in our experiments. !model image
[ "# ByT5 - Base\n\nByT5 is a tokenizer-free version of Google's T5 and generally follows the architecture of MT5.\n\nByT5 was only pre-trained on mC4 excluding any supervised training with an average span-mask of 20 UTF-8 characters. Therefore, this model has to be fine-tuned before it is useable on a downstream task.\n\nByT5 works especially well on noisy text data,*e.g.*, 'google/byt5-base' significantly outperforms mt5-base on TweetQA.\n\nPaper: ByT5: Towards a token-free future with pre-trained byte-to-byte models\n\nAuthors: *Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel*", "## Example Inference\n\nByT5 works on raw UTF-8 bytes and can be used without a tokenizer:\n\n\n\nFor batched inference & training it is however recommended using a tokenizer class for padding:", "## Abstract\n\nMost widely-used pre-trained language models operate on sequences of tokens corresponding to word or subword units. Encoding text as a sequence of tokens requires a tokenizer, which is typically created as an independent artifact from the model. Token-free models that instead operate directly on raw text (bytes or characters) have many benefits: they can process text in any language out of the box, they are more robust to noise, and they minimize technical debt by removing complex and error-prone text preprocessing pipelines. Since byte or character sequences are longer than token sequences, past work on token-free models has often introduced new model architectures designed to amortize the cost of operating directly on raw text. In this paper, we show that a standard Transformer architecture can be used with minimal modifications to process byte sequences. We carefully characterize the trade-offs in terms of parameter count, training FLOPs, and inference speed, and show that byte-level models are competitive with their token-level counterparts. We also demonstrate that byte-level models are significantly more robust to noise and perform better on tasks that are sensitive to spelling and pronunciation. As part of our contribution, we release a new set of pre-trained byte-level Transformer models based on the T5 architecture, as well as all code and data used in our experiments.\n\n!model image" ]
[ "TAGS\n#transformers #pytorch #tf #jax #t5 #text2text-generation #multilingual #af #am #ar #az #be #bg #bn #ca #ceb #co #cs #cy #da #de #el #en #eo #es #et #eu #fa #fi #fil #fr #fy #ga #gd #gl #gu #ha #haw #hi #hmn #ht #hu #hy #ig #is #it #iw #ja #jv #ka #kk #km #kn #ko #ku #ky #la #lb #lo #lt #lv #mg #mi #mk #ml #mn #mr #ms #mt #my #ne #nl #no #ny #pa #pl #ps #pt #ro #ru #sd #si #sk #sl #sm #sn #so #sq #sr #st #su #sv #sw #ta #te #tg #th #tr #uk #und #ur #uz #vi #xh #yi #yo #zh #zu #dataset-mc4 #arxiv-1907.06292 #arxiv-2105.13626 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n", "# ByT5 - Base\n\nByT5 is a tokenizer-free version of Google's T5 and generally follows the architecture of MT5.\n\nByT5 was only pre-trained on mC4 excluding any supervised training with an average span-mask of 20 UTF-8 characters. Therefore, this model has to be fine-tuned before it is useable on a downstream task.\n\nByT5 works especially well on noisy text data,*e.g.*, 'google/byt5-base' significantly outperforms mt5-base on TweetQA.\n\nPaper: ByT5: Towards a token-free future with pre-trained byte-to-byte models\n\nAuthors: *Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel*", "## Example Inference\n\nByT5 works on raw UTF-8 bytes and can be used without a tokenizer:\n\n\n\nFor batched inference & training it is however recommended using a tokenizer class for padding:", "## Abstract\n\nMost widely-used pre-trained language models operate on sequences of tokens corresponding to word or subword units. Encoding text as a sequence of tokens requires a tokenizer, which is typically created as an independent artifact from the model. Token-free models that instead operate directly on raw text (bytes or characters) have many benefits: they can process text in any language out of the box, they are more robust to noise, and they minimize technical debt by removing complex and error-prone text preprocessing pipelines. Since byte or character sequences are longer than token sequences, past work on token-free models has often introduced new model architectures designed to amortize the cost of operating directly on raw text. In this paper, we show that a standard Transformer architecture can be used with minimal modifications to process byte sequences. We carefully characterize the trade-offs in terms of parameter count, training FLOPs, and inference speed, and show that byte-level models are competitive with their token-level counterparts. We also demonstrate that byte-level models are significantly more robust to noise and perform better on tasks that are sensitive to spelling and pronunciation. As part of our contribution, we release a new set of pre-trained byte-level Transformer models based on the T5 architecture, as well as all code and data used in our experiments.\n\n!model image" ]
text2text-generation
transformers
# ByT5 - large

ByT5 is a tokenizer-free version of [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and generally follows the architecture of [MT5](https://huggingface.co/google/mt5-large).

ByT5 was pre-trained only on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual), without any supervised training, using an average span-mask of 20 UTF-8 characters. Therefore, this model has to be fine-tuned before it is usable on a downstream task.

ByT5 works especially well on noisy text data, *e.g.*, `google/byt5-large` significantly outperforms [mt5-large](https://huggingface.co/google/mt5-large) on [TweetQA](https://arxiv.org/abs/1907.06292).

Paper: [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626)

Authors: *Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel*

## Example Inference

ByT5 works on raw UTF-8 bytes and can be used without a tokenizer:

```python
from transformers import T5ForConditionalGeneration
import torch

model = T5ForConditionalGeneration.from_pretrained('google/byt5-large')

input_ids = torch.tensor([list("Life is like a box of chocolates.".encode("utf-8"))]) + 3  # add 3 for special tokens
labels = torch.tensor([list("La vie est comme une boîte de chocolat.".encode("utf-8"))]) + 3  # add 3 for special tokens

loss = model(input_ids, labels=labels).loss  # forward pass
```

For batched inference & training, however, it is recommended to use a tokenizer class for padding:

```python
from transformers import T5ForConditionalGeneration, AutoTokenizer

model = T5ForConditionalGeneration.from_pretrained('google/byt5-large')
tokenizer = AutoTokenizer.from_pretrained('google/byt5-large')

model_inputs = tokenizer(["Life is like a box of chocolates.", "Today is Monday."], padding="longest", return_tensors="pt")
labels = tokenizer(["La vie est comme une boîte de chocolat.", "Aujourd'hui c'est lundi."], padding="longest", return_tensors="pt").input_ids

loss = model(**model_inputs, labels=labels).loss  # forward pass
```

## Abstract

Most widely-used pre-trained language models operate on sequences of tokens corresponding to word or subword units. Encoding text as a sequence of tokens requires a tokenizer, which is typically created as an independent artifact from the model. Token-free models that instead operate directly on raw text (bytes or characters) have many benefits: they can process text in any language out of the box, they are more robust to noise, and they minimize technical debt by removing complex and error-prone text preprocessing pipelines. Since byte or character sequences are longer than token sequences, past work on token-free models has often introduced new model architectures designed to amortize the cost of operating directly on raw text. In this paper, we show that a standard Transformer architecture can be used with minimal modifications to process byte sequences. We carefully characterize the trade-offs in terms of parameter count, training FLOPs, and inference speed, and show that byte-level models are competitive with their token-level counterparts. We also demonstrate that byte-level models are significantly more robust to noise and perform better on tasks that are sensitive to spelling and pronunciation. As part of our contribution, we release a new set of pre-trained byte-level Transformer models based on the T5 architecture, as well as all code and data used in our experiments.
![model image](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/ByT5.png)
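As a follow-on to the example above (a sketch added here, not part of the original model card): the same "+3 for special tokens" offset can be inverted to turn generated ids back into text. This assumes ByT5's usual reserved ids (0 = pad, 1 = EOS, 2 = UNK) and that ids 3–258 map to raw byte values; note that the purely pre-trained checkpoint will not generate meaningful text, so the point is only the id-to-byte mechanics:

```python
# Sketch (not from the card): manually decoding generated ids back to UTF-8 text,
# assuming ids 0-2 are reserved (pad/eos/unk) and ids 3-258 are raw byte values.
from transformers import T5ForConditionalGeneration
import torch

model = T5ForConditionalGeneration.from_pretrained('google/byt5-large')

input_ids = torch.tensor([list("Life is like a box of chocolates.".encode("utf-8"))]) + 3
generated = model.generate(input_ids, max_length=64)

# Keep only ids that map to raw bytes, shift them back, and decode the byte string.
byte_values = [i - 3 for i in generated[0].tolist() if 3 <= i < 259]
print(bytes(byte_values).decode("utf-8", errors="ignore"))
```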
{"language": ["multilingual", "af", "am", "ar", "az", "be", "bg", "bn", "ca", "ceb", "co", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fil", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "haw", "hi", "hmn", "ht", "hu", "hy", "ig", "is", "it", "iw", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lb", "lo", "lt", "lv", "mg", "mi", "mk", "ml", "mn", "mr", "ms", "mt", "my", "ne", "nl", false, "ny", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "sm", "sn", "so", "sq", "sr", "st", "su", "sv", "sw", "ta", "te", "tg", "th", "tr", "uk", "und", "ur", "uz", "vi", "xh", "yi", "yo", "zh", "zu"], "license": "apache-2.0", "datasets": ["mc4"]}
google/byt5-large
null
[ "transformers", "pytorch", "tf", "jax", "t5", "text2text-generation", "multilingual", "af", "am", "ar", "az", "be", "bg", "bn", "ca", "ceb", "co", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fil", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "haw", "hi", "hmn", "ht", "hu", "hy", "ig", "is", "it", "iw", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lb", "lo", "lt", "lv", "mg", "mi", "mk", "ml", "mn", "mr", "ms", "mt", "my", "ne", "nl", "no", "ny", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "sm", "sn", "so", "sq", "sr", "st", "su", "sv", "sw", "ta", "te", "tg", "th", "tr", "uk", "und", "ur", "uz", "vi", "xh", "yi", "yo", "zh", "zu", "dataset:mc4", "arxiv:1907.06292", "arxiv:2105.13626", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1907.06292", "2105.13626" ]
[ "multilingual", "af", "am", "ar", "az", "be", "bg", "bn", "ca", "ceb", "co", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fil", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "haw", "hi", "hmn", "ht", "hu", "hy", "ig", "is", "it", "iw", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lb", "lo", "lt", "lv", "mg", "mi", "mk", "ml", "mn", "mr", "ms", "mt", "my", "ne", "nl", "no", "ny", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "sm", "sn", "so", "sq", "sr", "st", "su", "sv", "sw", "ta", "te", "tg", "th", "tr", "uk", "und", "ur", "uz", "vi", "xh", "yi", "yo", "zh", "zu" ]
TAGS #transformers #pytorch #tf #jax #t5 #text2text-generation #multilingual #af #am #ar #az #be #bg #bn #ca #ceb #co #cs #cy #da #de #el #en #eo #es #et #eu #fa #fi #fil #fr #fy #ga #gd #gl #gu #ha #haw #hi #hmn #ht #hu #hy #ig #is #it #iw #ja #jv #ka #kk #km #kn #ko #ku #ky #la #lb #lo #lt #lv #mg #mi #mk #ml #mn #mr #ms #mt #my #ne #nl #no #ny #pa #pl #ps #pt #ro #ru #sd #si #sk #sl #sm #sn #so #sq #sr #st #su #sv #sw #ta #te #tg #th #tr #uk #und #ur #uz #vi #xh #yi #yo #zh #zu #dataset-mc4 #arxiv-1907.06292 #arxiv-2105.13626 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
# ByT5 - large ByT5 is a tokenizer-free version of Google's T5 and generally follows the architecture of MT5. ByT5 was only pre-trained on mC4 excluding any supervised training with an average span-mask of 20 UTF-8 characters. Therefore, this model has to be fine-tuned before it is useable on a downstream task. ByT5 works especially well on noisy text data,*e.g.*, 'google/byt5-large' significantly outperforms mt5-large on TweetQA. Paper: ByT5: Towards a token-free future with pre-trained byte-to-byte models Authors: *Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel* ## Example Inference ByT5 works on raw UTF-8 bytes and can be used without a tokenizer: For batched inference & training it is however recommended using a tokenizer class for padding: ## Abstract Most widely-used pre-trained language models operate on sequences of tokens corresponding to word or subword units. Encoding text as a sequence of tokens requires a tokenizer, which is typically created as an independent artifact from the model. Token-free models that instead operate directly on raw text (bytes or characters) have many benefits: they can process text in any language out of the box, they are more robust to noise, and they minimize technical debt by removing complex and error-prone text preprocessing pipelines. Since byte or character sequences are longer than token sequences, past work on token-free models has often introduced new model architectures designed to amortize the cost of operating directly on raw text. In this paper, we show that a standard Transformer architecture can be used with minimal modifications to process byte sequences. We carefully characterize the trade-offs in terms of parameter count, training FLOPs, and inference speed, and show that byte-level models are competitive with their token-level counterparts. We also demonstrate that byte-level models are significantly more robust to noise and perform better on tasks that are sensitive to spelling and pronunciation. As part of our contribution, we release a new set of pre-trained byte-level Transformer models based on the T5 architecture, as well as all code and data used in our experiments. !model image
[ "# ByT5 - large\n\nByT5 is a tokenizer-free version of Google's T5 and generally follows the architecture of MT5.\n\nByT5 was only pre-trained on mC4 excluding any supervised training with an average span-mask of 20 UTF-8 characters. Therefore, this model has to be fine-tuned before it is useable on a downstream task.\n\nByT5 works especially well on noisy text data,*e.g.*, 'google/byt5-large' significantly outperforms mt5-large on TweetQA.\n\nPaper: ByT5: Towards a token-free future with pre-trained byte-to-byte models\n\nAuthors: *Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel*", "## Example Inference\n\nByT5 works on raw UTF-8 bytes and can be used without a tokenizer:\n\n\n\nFor batched inference & training it is however recommended using a tokenizer class for padding:", "## Abstract\n\nMost widely-used pre-trained language models operate on sequences of tokens corresponding to word or subword units. Encoding text as a sequence of tokens requires a tokenizer, which is typically created as an independent artifact from the model. Token-free models that instead operate directly on raw text (bytes or characters) have many benefits: they can process text in any language out of the box, they are more robust to noise, and they minimize technical debt by removing complex and error-prone text preprocessing pipelines. Since byte or character sequences are longer than token sequences, past work on token-free models has often introduced new model architectures designed to amortize the cost of operating directly on raw text. In this paper, we show that a standard Transformer architecture can be used with minimal modifications to process byte sequences. We carefully characterize the trade-offs in terms of parameter count, training FLOPs, and inference speed, and show that byte-level models are competitive with their token-level counterparts. We also demonstrate that byte-level models are significantly more robust to noise and perform better on tasks that are sensitive to spelling and pronunciation. As part of our contribution, we release a new set of pre-trained byte-level Transformer models based on the T5 architecture, as well as all code and data used in our experiments.\n\n!model image" ]
[ "TAGS\n#transformers #pytorch #tf #jax #t5 #text2text-generation #multilingual #af #am #ar #az #be #bg #bn #ca #ceb #co #cs #cy #da #de #el #en #eo #es #et #eu #fa #fi #fil #fr #fy #ga #gd #gl #gu #ha #haw #hi #hmn #ht #hu #hy #ig #is #it #iw #ja #jv #ka #kk #km #kn #ko #ku #ky #la #lb #lo #lt #lv #mg #mi #mk #ml #mn #mr #ms #mt #my #ne #nl #no #ny #pa #pl #ps #pt #ro #ru #sd #si #sk #sl #sm #sn #so #sq #sr #st #su #sv #sw #ta #te #tg #th #tr #uk #und #ur #uz #vi #xh #yi #yo #zh #zu #dataset-mc4 #arxiv-1907.06292 #arxiv-2105.13626 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n", "# ByT5 - large\n\nByT5 is a tokenizer-free version of Google's T5 and generally follows the architecture of MT5.\n\nByT5 was only pre-trained on mC4 excluding any supervised training with an average span-mask of 20 UTF-8 characters. Therefore, this model has to be fine-tuned before it is useable on a downstream task.\n\nByT5 works especially well on noisy text data,*e.g.*, 'google/byt5-large' significantly outperforms mt5-large on TweetQA.\n\nPaper: ByT5: Towards a token-free future with pre-trained byte-to-byte models\n\nAuthors: *Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel*", "## Example Inference\n\nByT5 works on raw UTF-8 bytes and can be used without a tokenizer:\n\n\n\nFor batched inference & training it is however recommended using a tokenizer class for padding:", "## Abstract\n\nMost widely-used pre-trained language models operate on sequences of tokens corresponding to word or subword units. Encoding text as a sequence of tokens requires a tokenizer, which is typically created as an independent artifact from the model. Token-free models that instead operate directly on raw text (bytes or characters) have many benefits: they can process text in any language out of the box, they are more robust to noise, and they minimize technical debt by removing complex and error-prone text preprocessing pipelines. Since byte or character sequences are longer than token sequences, past work on token-free models has often introduced new model architectures designed to amortize the cost of operating directly on raw text. In this paper, we show that a standard Transformer architecture can be used with minimal modifications to process byte sequences. We carefully characterize the trade-offs in terms of parameter count, training FLOPs, and inference speed, and show that byte-level models are competitive with their token-level counterparts. We also demonstrate that byte-level models are significantly more robust to noise and perform better on tasks that are sensitive to spelling and pronunciation. As part of our contribution, we release a new set of pre-trained byte-level Transformer models based on the T5 architecture, as well as all code and data used in our experiments.\n\n!model image" ]
text2text-generation
transformers
# ByT5 - Small

ByT5 is a tokenizer-free version of [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and generally follows the architecture of [MT5](https://huggingface.co/google/mt5-small).

ByT5 was pre-trained only on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual), without any supervised training, using an average span-mask of 20 UTF-8 characters. Therefore, this model has to be fine-tuned before it is usable on a downstream task.

ByT5 works especially well on noisy text data, *e.g.*, `google/byt5-small` significantly outperforms [mt5-small](https://huggingface.co/google/mt5-small) on [TweetQA](https://arxiv.org/abs/1907.06292).

Paper: [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626)

Authors: *Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel*

## Example Inference

ByT5 works on raw UTF-8 bytes and can be used without a tokenizer:

```python
from transformers import T5ForConditionalGeneration
import torch

model = T5ForConditionalGeneration.from_pretrained('google/byt5-small')

input_ids = torch.tensor([list("Life is like a box of chocolates.".encode("utf-8"))]) + 3  # add 3 for special tokens
labels = torch.tensor([list("La vie est comme une boîte de chocolat.".encode("utf-8"))]) + 3  # add 3 for special tokens

loss = model(input_ids, labels=labels).loss  # forward pass
```

For batched inference & training, however, it is recommended to use a tokenizer class for padding:

```python
from transformers import T5ForConditionalGeneration, AutoTokenizer

model = T5ForConditionalGeneration.from_pretrained('google/byt5-small')
tokenizer = AutoTokenizer.from_pretrained('google/byt5-small')

model_inputs = tokenizer(["Life is like a box of chocolates.", "Today is Monday."], padding="longest", return_tensors="pt")
labels = tokenizer(["La vie est comme une boîte de chocolat.", "Aujourd'hui c'est lundi."], padding="longest", return_tensors="pt").input_ids

loss = model(**model_inputs, labels=labels).loss  # forward pass
```

## Abstract

Most widely-used pre-trained language models operate on sequences of tokens corresponding to word or subword units. Encoding text as a sequence of tokens requires a tokenizer, which is typically created as an independent artifact from the model. Token-free models that instead operate directly on raw text (bytes or characters) have many benefits: they can process text in any language out of the box, they are more robust to noise, and they minimize technical debt by removing complex and error-prone text preprocessing pipelines. Since byte or character sequences are longer than token sequences, past work on token-free models has often introduced new model architectures designed to amortize the cost of operating directly on raw text. In this paper, we show that a standard Transformer architecture can be used with minimal modifications to process byte sequences. We carefully characterize the trade-offs in terms of parameter count, training FLOPs, and inference speed, and show that byte-level models are competitive with their token-level counterparts. We also demonstrate that byte-level models are significantly more robust to noise and perform better on tasks that are sensitive to spelling and pronunciation. As part of our contribution, we release a new set of pre-trained byte-level Transformer models based on the T5 architecture, as well as all code and data used in our experiments.
![model image](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/ByT5.png)
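The batched example above stops at the loss computation. As an added sketch (not part of the original card), the same tokenizer can also handle decoding when generating, which avoids dealing with the byte offset by hand; again, output from the purely pre-trained checkpoint is not expected to be meaningful:

```python
# Sketch (not from the card): generation with the ByT5 tokenizer handling both
# padding on the input side and byte-to-text decoding on the output side.
from transformers import T5ForConditionalGeneration, AutoTokenizer

model = T5ForConditionalGeneration.from_pretrained('google/byt5-small')
tokenizer = AutoTokenizer.from_pretrained('google/byt5-small')

model_inputs = tokenizer(["Life is like a box of chocolates.", "Today is Monday."],
                         padding="longest", return_tensors="pt")
generated = model.generate(**model_inputs, max_length=64)

# batch_decode maps the byte-level ids back to UTF-8 strings.
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```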
{"language": ["multilingual", "af", "am", "ar", "az", "be", "bg", "bn", "ca", "ceb", "co", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fil", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "haw", "hi", "hmn", "ht", "hu", "hy", "ig", "is", "it", "iw", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lb", "lo", "lt", "lv", "mg", "mi", "mk", "ml", "mn", "mr", "ms", "mt", "my", "ne", "nl", false, "ny", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "sm", "sn", "so", "sq", "sr", "st", "su", "sv", "sw", "ta", "te", "tg", "th", "tr", "uk", "und", "ur", "uz", "vi", "xh", "yi", "yo", "zh", "zu"], "license": "apache-2.0", "datasets": ["mc4"]}
google/byt5-small
null
[ "transformers", "pytorch", "tf", "jax", "t5", "text2text-generation", "multilingual", "af", "am", "ar", "az", "be", "bg", "bn", "ca", "ceb", "co", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fil", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "haw", "hi", "hmn", "ht", "hu", "hy", "ig", "is", "it", "iw", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lb", "lo", "lt", "lv", "mg", "mi", "mk", "ml", "mn", "mr", "ms", "mt", "my", "ne", "nl", "no", "ny", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "sm", "sn", "so", "sq", "sr", "st", "su", "sv", "sw", "ta", "te", "tg", "th", "tr", "uk", "und", "ur", "uz", "vi", "xh", "yi", "yo", "zh", "zu", "dataset:mc4", "arxiv:1907.06292", "arxiv:2105.13626", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1907.06292", "2105.13626" ]
[ "multilingual", "af", "am", "ar", "az", "be", "bg", "bn", "ca", "ceb", "co", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fil", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "haw", "hi", "hmn", "ht", "hu", "hy", "ig", "is", "it", "iw", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lb", "lo", "lt", "lv", "mg", "mi", "mk", "ml", "mn", "mr", "ms", "mt", "my", "ne", "nl", "no", "ny", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "sm", "sn", "so", "sq", "sr", "st", "su", "sv", "sw", "ta", "te", "tg", "th", "tr", "uk", "und", "ur", "uz", "vi", "xh", "yi", "yo", "zh", "zu" ]
TAGS #transformers #pytorch #tf #jax #t5 #text2text-generation #multilingual #af #am #ar #az #be #bg #bn #ca #ceb #co #cs #cy #da #de #el #en #eo #es #et #eu #fa #fi #fil #fr #fy #ga #gd #gl #gu #ha #haw #hi #hmn #ht #hu #hy #ig #is #it #iw #ja #jv #ka #kk #km #kn #ko #ku #ky #la #lb #lo #lt #lv #mg #mi #mk #ml #mn #mr #ms #mt #my #ne #nl #no #ny #pa #pl #ps #pt #ro #ru #sd #si #sk #sl #sm #sn #so #sq #sr #st #su #sv #sw #ta #te #tg #th #tr #uk #und #ur #uz #vi #xh #yi #yo #zh #zu #dataset-mc4 #arxiv-1907.06292 #arxiv-2105.13626 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
# ByT5 - Small ByT5 is a tokenizer-free version of Google's T5 and generally follows the architecture of MT5. ByT5 was only pre-trained on mC4 excluding any supervised training with an average span-mask of 20 UTF-8 characters. Therefore, this model has to be fine-tuned before it is useable on a downstream task. ByT5 works especially well on noisy text data,*e.g.*, 'google/byt5-small' significantly outperforms mt5-small on TweetQA. Paper: ByT5: Towards a token-free future with pre-trained byte-to-byte models Authors: *Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel* ## Example Inference ByT5 works on raw UTF-8 bytes and can be used without a tokenizer: For batched inference & training it is however recommended using a tokenizer class for padding: ## Abstract Most widely-used pre-trained language models operate on sequences of tokens corresponding to word or subword units. Encoding text as a sequence of tokens requires a tokenizer, which is typically created as an independent artifact from the model. Token-free models that instead operate directly on raw text (bytes or characters) have many benefits: they can process text in any language out of the box, they are more robust to noise, and they minimize technical debt by removing complex and error-prone text preprocessing pipelines. Since byte or character sequences are longer than token sequences, past work on token-free models has often introduced new model architectures designed to amortize the cost of operating directly on raw text. In this paper, we show that a standard Transformer architecture can be used with minimal modifications to process byte sequences. We carefully characterize the trade-offs in terms of parameter count, training FLOPs, and inference speed, and show that byte-level models are competitive with their token-level counterparts. We also demonstrate that byte-level models are significantly more robust to noise and perform better on tasks that are sensitive to spelling and pronunciation. As part of our contribution, we release a new set of pre-trained byte-level Transformer models based on the T5 architecture, as well as all code and data used in our experiments. !model image
[ "# ByT5 - Small\n\nByT5 is a tokenizer-free version of Google's T5 and generally follows the architecture of MT5.\n\nByT5 was only pre-trained on mC4 excluding any supervised training with an average span-mask of 20 UTF-8 characters. Therefore, this model has to be fine-tuned before it is useable on a downstream task.\n\nByT5 works especially well on noisy text data,*e.g.*, 'google/byt5-small' significantly outperforms mt5-small on TweetQA.\n\nPaper: ByT5: Towards a token-free future with pre-trained byte-to-byte models\n\nAuthors: *Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel*", "## Example Inference\n\nByT5 works on raw UTF-8 bytes and can be used without a tokenizer:\n\n\n\nFor batched inference & training it is however recommended using a tokenizer class for padding:", "## Abstract\n\nMost widely-used pre-trained language models operate on sequences of tokens corresponding to word or subword units. Encoding text as a sequence of tokens requires a tokenizer, which is typically created as an independent artifact from the model. Token-free models that instead operate directly on raw text (bytes or characters) have many benefits: they can process text in any language out of the box, they are more robust to noise, and they minimize technical debt by removing complex and error-prone text preprocessing pipelines. Since byte or character sequences are longer than token sequences, past work on token-free models has often introduced new model architectures designed to amortize the cost of operating directly on raw text. In this paper, we show that a standard Transformer architecture can be used with minimal modifications to process byte sequences. We carefully characterize the trade-offs in terms of parameter count, training FLOPs, and inference speed, and show that byte-level models are competitive with their token-level counterparts. We also demonstrate that byte-level models are significantly more robust to noise and perform better on tasks that are sensitive to spelling and pronunciation. As part of our contribution, we release a new set of pre-trained byte-level Transformer models based on the T5 architecture, as well as all code and data used in our experiments.\n\n!model image" ]
[ "TAGS\n#transformers #pytorch #tf #jax #t5 #text2text-generation #multilingual #af #am #ar #az #be #bg #bn #ca #ceb #co #cs #cy #da #de #el #en #eo #es #et #eu #fa #fi #fil #fr #fy #ga #gd #gl #gu #ha #haw #hi #hmn #ht #hu #hy #ig #is #it #iw #ja #jv #ka #kk #km #kn #ko #ku #ky #la #lb #lo #lt #lv #mg #mi #mk #ml #mn #mr #ms #mt #my #ne #nl #no #ny #pa #pl #ps #pt #ro #ru #sd #si #sk #sl #sm #sn #so #sq #sr #st #su #sv #sw #ta #te #tg #th #tr #uk #und #ur #uz #vi #xh #yi #yo #zh #zu #dataset-mc4 #arxiv-1907.06292 #arxiv-2105.13626 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n", "# ByT5 - Small\n\nByT5 is a tokenizer-free version of Google's T5 and generally follows the architecture of MT5.\n\nByT5 was only pre-trained on mC4 excluding any supervised training with an average span-mask of 20 UTF-8 characters. Therefore, this model has to be fine-tuned before it is useable on a downstream task.\n\nByT5 works especially well on noisy text data,*e.g.*, 'google/byt5-small' significantly outperforms mt5-small on TweetQA.\n\nPaper: ByT5: Towards a token-free future with pre-trained byte-to-byte models\n\nAuthors: *Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel*", "## Example Inference\n\nByT5 works on raw UTF-8 bytes and can be used without a tokenizer:\n\n\n\nFor batched inference & training it is however recommended using a tokenizer class for padding:", "## Abstract\n\nMost widely-used pre-trained language models operate on sequences of tokens corresponding to word or subword units. Encoding text as a sequence of tokens requires a tokenizer, which is typically created as an independent artifact from the model. Token-free models that instead operate directly on raw text (bytes or characters) have many benefits: they can process text in any language out of the box, they are more robust to noise, and they minimize technical debt by removing complex and error-prone text preprocessing pipelines. Since byte or character sequences are longer than token sequences, past work on token-free models has often introduced new model architectures designed to amortize the cost of operating directly on raw text. In this paper, we show that a standard Transformer architecture can be used with minimal modifications to process byte sequences. We carefully characterize the trade-offs in terms of parameter count, training FLOPs, and inference speed, and show that byte-level models are competitive with their token-level counterparts. We also demonstrate that byte-level models are significantly more robust to noise and perform better on tasks that are sensitive to spelling and pronunciation. As part of our contribution, we release a new set of pre-trained byte-level Transformer models based on the T5 architecture, as well as all code and data used in our experiments.\n\n!model image" ]
text2text-generation
transformers
# ByT5 - xl

ByT5 is a tokenizer-free version of [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and generally follows the architecture of [MT5](https://huggingface.co/google/mt5-xl).

ByT5 was pre-trained only on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual), without any supervised training, using an average span-mask of 20 UTF-8 characters. Therefore, this model has to be fine-tuned before it is usable on a downstream task.

ByT5 works especially well on noisy text data, *e.g.*, `google/byt5-xl` significantly outperforms [mt5-xl](https://huggingface.co/google/mt5-xl) on [TweetQA](https://arxiv.org/abs/1907.06292).

Paper: [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626)

Authors: *Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel*

## Example Inference

ByT5 works on raw UTF-8 bytes and can be used without a tokenizer:

```python
from transformers import T5ForConditionalGeneration
import torch

model = T5ForConditionalGeneration.from_pretrained('google/byt5-xl')

input_ids = torch.tensor([list("Life is like a box of chocolates.".encode("utf-8"))]) + 3  # add 3 for special tokens
labels = torch.tensor([list("La vie est comme une boîte de chocolat.".encode("utf-8"))]) + 3  # add 3 for special tokens

loss = model(input_ids, labels=labels).loss  # forward pass
```

For batched inference & training, however, it is recommended to use a tokenizer class for padding:

```python
from transformers import T5ForConditionalGeneration, AutoTokenizer

model = T5ForConditionalGeneration.from_pretrained('google/byt5-xl')
tokenizer = AutoTokenizer.from_pretrained('google/byt5-xl')

model_inputs = tokenizer(["Life is like a box of chocolates.", "Today is Monday."], padding="longest", return_tensors="pt")
labels = tokenizer(["La vie est comme une boîte de chocolat.", "Aujourd'hui c'est lundi."], padding="longest", return_tensors="pt").input_ids

loss = model(**model_inputs, labels=labels).loss  # forward pass
```

## Abstract

Most widely-used pre-trained language models operate on sequences of tokens corresponding to word or subword units. Encoding text as a sequence of tokens requires a tokenizer, which is typically created as an independent artifact from the model. Token-free models that instead operate directly on raw text (bytes or characters) have many benefits: they can process text in any language out of the box, they are more robust to noise, and they minimize technical debt by removing complex and error-prone text preprocessing pipelines. Since byte or character sequences are longer than token sequences, past work on token-free models has often introduced new model architectures designed to amortize the cost of operating directly on raw text. In this paper, we show that a standard Transformer architecture can be used with minimal modifications to process byte sequences. We carefully characterize the trade-offs in terms of parameter count, training FLOPs, and inference speed, and show that byte-level models are competitive with their token-level counterparts. We also demonstrate that byte-level models are significantly more robust to noise and perform better on tasks that are sensitive to spelling and pronunciation. As part of our contribution, we release a new set of pre-trained byte-level Transformer models based on the T5 architecture, as well as all code and data used in our experiments.
![model image](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/ByT5.png)
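The abstract notes that byte sequences are longer than token sequences. As a rough illustration added here (not part of the original card, and the exact counts depend on the tokenizer), the same sentence can be measured in UTF-8 bytes, which is roughly what ByT5 consumes, and in mT5 subword pieces:

```python
# Rough illustration (not from the card) of the byte-vs-subword length trade-off.
from transformers import AutoTokenizer

text = "La vie est comme une boîte de chocolat."

num_bytes = len(text.encode("utf-8"))  # ByT5 sees roughly one id per UTF-8 byte
num_subwords = len(AutoTokenizer.from_pretrained("google/mt5-xl")(text).input_ids)

print(f"UTF-8 bytes: {num_bytes}, mT5 subword tokens: {num_subwords}")
```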
{"language": ["multilingual", "af", "am", "ar", "az", "be", "bg", "bn", "ca", "ceb", "co", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fil", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "haw", "hi", "hmn", "ht", "hu", "hy", "ig", "is", "it", "iw", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lb", "lo", "lt", "lv", "mg", "mi", "mk", "ml", "mn", "mr", "ms", "mt", "my", "ne", "nl", false, "ny", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "sm", "sn", "so", "sq", "sr", "st", "su", "sv", "sw", "ta", "te", "tg", "th", "tr", "uk", "und", "ur", "uz", "vi", "xh", "yi", "yo", "zh", "zu"], "license": "apache-2.0", "datasets": ["mc4"]}
google/byt5-xl
null
[ "transformers", "pytorch", "tf", "t5", "text2text-generation", "multilingual", "af", "am", "ar", "az", "be", "bg", "bn", "ca", "ceb", "co", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fil", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "haw", "hi", "hmn", "ht", "hu", "hy", "ig", "is", "it", "iw", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lb", "lo", "lt", "lv", "mg", "mi", "mk", "ml", "mn", "mr", "ms", "mt", "my", "ne", "nl", "no", "ny", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "sm", "sn", "so", "sq", "sr", "st", "su", "sv", "sw", "ta", "te", "tg", "th", "tr", "uk", "und", "ur", "uz", "vi", "xh", "yi", "yo", "zh", "zu", "dataset:mc4", "arxiv:1907.06292", "arxiv:2105.13626", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1907.06292", "2105.13626" ]
[ "multilingual", "af", "am", "ar", "az", "be", "bg", "bn", "ca", "ceb", "co", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fil", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "haw", "hi", "hmn", "ht", "hu", "hy", "ig", "is", "it", "iw", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lb", "lo", "lt", "lv", "mg", "mi", "mk", "ml", "mn", "mr", "ms", "mt", "my", "ne", "nl", "no", "ny", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "sm", "sn", "so", "sq", "sr", "st", "su", "sv", "sw", "ta", "te", "tg", "th", "tr", "uk", "und", "ur", "uz", "vi", "xh", "yi", "yo", "zh", "zu" ]
TAGS #transformers #pytorch #tf #t5 #text2text-generation #multilingual #af #am #ar #az #be #bg #bn #ca #ceb #co #cs #cy #da #de #el #en #eo #es #et #eu #fa #fi #fil #fr #fy #ga #gd #gl #gu #ha #haw #hi #hmn #ht #hu #hy #ig #is #it #iw #ja #jv #ka #kk #km #kn #ko #ku #ky #la #lb #lo #lt #lv #mg #mi #mk #ml #mn #mr #ms #mt #my #ne #nl #no #ny #pa #pl #ps #pt #ro #ru #sd #si #sk #sl #sm #sn #so #sq #sr #st #su #sv #sw #ta #te #tg #th #tr #uk #und #ur #uz #vi #xh #yi #yo #zh #zu #dataset-mc4 #arxiv-1907.06292 #arxiv-2105.13626 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
# ByT5 - xl ByT5 is a tokenizer-free version of Google's T5 and generally follows the architecture of MT5. ByT5 was only pre-trained on mC4 excluding any supervised training with an average span-mask of 20 UTF-8 characters. Therefore, this model has to be fine-tuned before it is useable on a downstream task. ByT5 works especially well on noisy text data,*e.g.*, 'google/byt5-xl' significantly outperforms mt5-xl on TweetQA. Paper: ByT5: Towards a token-free future with pre-trained byte-to-byte models Authors: *Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel* ## Example Inference ByT5 works on raw UTF-8 bytes and can be used without a tokenizer: For batched inference & training it is however recommended using a tokenizer class for padding: ## Abstract Most widely-used pre-trained language models operate on sequences of tokens corresponding to word or subword units. Encoding text as a sequence of tokens requires a tokenizer, which is typically created as an independent artifact from the model. Token-free models that instead operate directly on raw text (bytes or characters) have many benefits: they can process text in any language out of the box, they are more robust to noise, and they minimize technical debt by removing complex and error-prone text preprocessing pipelines. Since byte or character sequences are longer than token sequences, past work on token-free models has often introduced new model architectures designed to amortize the cost of operating directly on raw text. In this paper, we show that a standard Transformer architecture can be used with minimal modifications to process byte sequences. We carefully characterize the trade-offs in terms of parameter count, training FLOPs, and inference speed, and show that byte-level models are competitive with their token-level counterparts. We also demonstrate that byte-level models are significantly more robust to noise and perform better on tasks that are sensitive to spelling and pronunciation. As part of our contribution, we release a new set of pre-trained byte-level Transformer models based on the T5 architecture, as well as all code and data used in our experiments. !model image
[ "# ByT5 - xl\n\nByT5 is a tokenizer-free version of Google's T5 and generally follows the architecture of MT5.\n\nByT5 was only pre-trained on mC4 excluding any supervised training with an average span-mask of 20 UTF-8 characters. Therefore, this model has to be fine-tuned before it is useable on a downstream task.\n\nByT5 works especially well on noisy text data,*e.g.*, 'google/byt5-xl' significantly outperforms mt5-xl on TweetQA.\n\nPaper: ByT5: Towards a token-free future with pre-trained byte-to-byte models\n\nAuthors: *Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel*", "## Example Inference\n\nByT5 works on raw UTF-8 bytes and can be used without a tokenizer:\n\n\n\nFor batched inference & training it is however recommended using a tokenizer class for padding:", "## Abstract\n\nMost widely-used pre-trained language models operate on sequences of tokens corresponding to word or subword units. Encoding text as a sequence of tokens requires a tokenizer, which is typically created as an independent artifact from the model. Token-free models that instead operate directly on raw text (bytes or characters) have many benefits: they can process text in any language out of the box, they are more robust to noise, and they minimize technical debt by removing complex and error-prone text preprocessing pipelines. Since byte or character sequences are longer than token sequences, past work on token-free models has often introduced new model architectures designed to amortize the cost of operating directly on raw text. In this paper, we show that a standard Transformer architecture can be used with minimal modifications to process byte sequences. We carefully characterize the trade-offs in terms of parameter count, training FLOPs, and inference speed, and show that byte-level models are competitive with their token-level counterparts. We also demonstrate that byte-level models are significantly more robust to noise and perform better on tasks that are sensitive to spelling and pronunciation. As part of our contribution, we release a new set of pre-trained byte-level Transformer models based on the T5 architecture, as well as all code and data used in our experiments.\n\n!model image" ]
[ "TAGS\n#transformers #pytorch #tf #t5 #text2text-generation #multilingual #af #am #ar #az #be #bg #bn #ca #ceb #co #cs #cy #da #de #el #en #eo #es #et #eu #fa #fi #fil #fr #fy #ga #gd #gl #gu #ha #haw #hi #hmn #ht #hu #hy #ig #is #it #iw #ja #jv #ka #kk #km #kn #ko #ku #ky #la #lb #lo #lt #lv #mg #mi #mk #ml #mn #mr #ms #mt #my #ne #nl #no #ny #pa #pl #ps #pt #ro #ru #sd #si #sk #sl #sm #sn #so #sq #sr #st #su #sv #sw #ta #te #tg #th #tr #uk #und #ur #uz #vi #xh #yi #yo #zh #zu #dataset-mc4 #arxiv-1907.06292 #arxiv-2105.13626 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n", "# ByT5 - xl\n\nByT5 is a tokenizer-free version of Google's T5 and generally follows the architecture of MT5.\n\nByT5 was only pre-trained on mC4 excluding any supervised training with an average span-mask of 20 UTF-8 characters. Therefore, this model has to be fine-tuned before it is useable on a downstream task.\n\nByT5 works especially well on noisy text data,*e.g.*, 'google/byt5-xl' significantly outperforms mt5-xl on TweetQA.\n\nPaper: ByT5: Towards a token-free future with pre-trained byte-to-byte models\n\nAuthors: *Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel*", "## Example Inference\n\nByT5 works on raw UTF-8 bytes and can be used without a tokenizer:\n\n\n\nFor batched inference & training it is however recommended using a tokenizer class for padding:", "## Abstract\n\nMost widely-used pre-trained language models operate on sequences of tokens corresponding to word or subword units. Encoding text as a sequence of tokens requires a tokenizer, which is typically created as an independent artifact from the model. Token-free models that instead operate directly on raw text (bytes or characters) have many benefits: they can process text in any language out of the box, they are more robust to noise, and they minimize technical debt by removing complex and error-prone text preprocessing pipelines. Since byte or character sequences are longer than token sequences, past work on token-free models has often introduced new model architectures designed to amortize the cost of operating directly on raw text. In this paper, we show that a standard Transformer architecture can be used with minimal modifications to process byte sequences. We carefully characterize the trade-offs in terms of parameter count, training FLOPs, and inference speed, and show that byte-level models are competitive with their token-level counterparts. We also demonstrate that byte-level models are significantly more robust to noise and perform better on tasks that are sensitive to spelling and pronunciation. As part of our contribution, we release a new set of pre-trained byte-level Transformer models based on the T5 architecture, as well as all code and data used in our experiments.\n\n!model image" ]
text2text-generation
transformers
# ByT5 - xxl

ByT5 is a tokenizer-free version of [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and generally follows the architecture of [MT5](https://huggingface.co/google/mt5-xxl).

ByT5 was pre-trained only on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual), without any supervised training, using an average span-mask of 20 UTF-8 characters. Therefore, this model has to be fine-tuned before it is usable on a downstream task.

ByT5 works especially well on noisy text data, *e.g.*, `google/byt5-xxl` significantly outperforms [mt5-xxl](https://huggingface.co/google/mt5-xxl) on [TweetQA](https://arxiv.org/abs/1907.06292).

Paper: [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626)

Authors: *Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel*

## Example Inference

ByT5 works on raw UTF-8 bytes and can be used without a tokenizer:

```python
from transformers import T5ForConditionalGeneration
import torch

model = T5ForConditionalGeneration.from_pretrained('google/byt5-xxl')

input_ids = torch.tensor([list("Life is like a box of chocolates.".encode("utf-8"))]) + 3  # add 3 for special tokens
labels = torch.tensor([list("La vie est comme une boîte de chocolat.".encode("utf-8"))]) + 3  # add 3 for special tokens

loss = model(input_ids, labels=labels).loss  # forward pass
```

For batched inference & training, however, it is recommended to use a tokenizer class for padding:

```python
from transformers import T5ForConditionalGeneration, AutoTokenizer

model = T5ForConditionalGeneration.from_pretrained('google/byt5-xxl')
tokenizer = AutoTokenizer.from_pretrained('google/byt5-xxl')

model_inputs = tokenizer(["Life is like a box of chocolates.", "Today is Monday."], padding="longest", return_tensors="pt")
labels = tokenizer(["La vie est comme une boîte de chocolat.", "Aujourd'hui c'est lundi."], padding="longest", return_tensors="pt").input_ids

loss = model(**model_inputs, labels=labels).loss  # forward pass
```

## Abstract

Most widely-used pre-trained language models operate on sequences of tokens corresponding to word or subword units. Encoding text as a sequence of tokens requires a tokenizer, which is typically created as an independent artifact from the model. Token-free models that instead operate directly on raw text (bytes or characters) have many benefits: they can process text in any language out of the box, they are more robust to noise, and they minimize technical debt by removing complex and error-prone text preprocessing pipelines. Since byte or character sequences are longer than token sequences, past work on token-free models has often introduced new model architectures designed to amortize the cost of operating directly on raw text. In this paper, we show that a standard Transformer architecture can be used with minimal modifications to process byte sequences. We carefully characterize the trade-offs in terms of parameter count, training FLOPs, and inference speed, and show that byte-level models are competitive with their token-level counterparts. We also demonstrate that byte-level models are significantly more robust to noise and perform better on tasks that are sensitive to spelling and pronunciation. As part of our contribution, we release a new set of pre-trained byte-level Transformer models based on the T5 architecture, as well as all code and data used in our experiments.
![model image](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/ByT5.png)
{"language": ["multilingual", "af", "am", "ar", "az", "be", "bg", "bn", "ca", "ceb", "co", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fil", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "haw", "hi", "hmn", "ht", "hu", "hy", "ig", "is", "it", "iw", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lb", "lo", "lt", "lv", "mg", "mi", "mk", "ml", "mn", "mr", "ms", "mt", "my", "ne", "nl", false, "ny", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "sm", "sn", "so", "sq", "sr", "st", "su", "sv", "sw", "ta", "te", "tg", "th", "tr", "uk", "und", "ur", "uz", "vi", "xh", "yi", "yo", "zh", "zu"], "license": "apache-2.0", "datasets": ["mc4"]}
google/byt5-xxl
null
[ "transformers", "pytorch", "tf", "t5", "text2text-generation", "multilingual", "af", "am", "ar", "az", "be", "bg", "bn", "ca", "ceb", "co", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fil", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "haw", "hi", "hmn", "ht", "hu", "hy", "ig", "is", "it", "iw", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lb", "lo", "lt", "lv", "mg", "mi", "mk", "ml", "mn", "mr", "ms", "mt", "my", "ne", "nl", "no", "ny", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "sm", "sn", "so", "sq", "sr", "st", "su", "sv", "sw", "ta", "te", "tg", "th", "tr", "uk", "und", "ur", "uz", "vi", "xh", "yi", "yo", "zh", "zu", "dataset:mc4", "arxiv:1907.06292", "arxiv:2105.13626", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1907.06292", "2105.13626" ]
[ "multilingual", "af", "am", "ar", "az", "be", "bg", "bn", "ca", "ceb", "co", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fil", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "haw", "hi", "hmn", "ht", "hu", "hy", "ig", "is", "it", "iw", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lb", "lo", "lt", "lv", "mg", "mi", "mk", "ml", "mn", "mr", "ms", "mt", "my", "ne", "nl", "no", "ny", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "sm", "sn", "so", "sq", "sr", "st", "su", "sv", "sw", "ta", "te", "tg", "th", "tr", "uk", "und", "ur", "uz", "vi", "xh", "yi", "yo", "zh", "zu" ]
TAGS #transformers #pytorch #tf #t5 #text2text-generation #multilingual #af #am #ar #az #be #bg #bn #ca #ceb #co #cs #cy #da #de #el #en #eo #es #et #eu #fa #fi #fil #fr #fy #ga #gd #gl #gu #ha #haw #hi #hmn #ht #hu #hy #ig #is #it #iw #ja #jv #ka #kk #km #kn #ko #ku #ky #la #lb #lo #lt #lv #mg #mi #mk #ml #mn #mr #ms #mt #my #ne #nl #no #ny #pa #pl #ps #pt #ro #ru #sd #si #sk #sl #sm #sn #so #sq #sr #st #su #sv #sw #ta #te #tg #th #tr #uk #und #ur #uz #vi #xh #yi #yo #zh #zu #dataset-mc4 #arxiv-1907.06292 #arxiv-2105.13626 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
# ByT5 - xxl ByT5 is a tokenizer-free version of Google's T5 and generally follows the architecture of MT5. ByT5 was only pre-trained on mC4 excluding any supervised training with an average span-mask of 20 UTF-8 characters. Therefore, this model has to be fine-tuned before it is useable on a downstream task. ByT5 works especially well on noisy text data,*e.g.*, 'google/byt5-xxl' significantly outperforms mt5-xxl on TweetQA. Paper: ByT5: Towards a token-free future with pre-trained byte-to-byte models Authors: *Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel* ## Example Inference ByT5 works on raw UTF-8 bytes and can be used without a tokenizer: For batched inference & training it is however recommended using a tokenizer class for padding: ## Abstract Most widely-used pre-trained language models operate on sequences of tokens corresponding to word or subword units. Encoding text as a sequence of tokens requires a tokenizer, which is typically created as an independent artifact from the model. Token-free models that instead operate directly on raw text (bytes or characters) have many benefits: they can process text in any language out of the box, they are more robust to noise, and they minimize technical debt by removing complex and error-prone text preprocessing pipelines. Since byte or character sequences are longer than token sequences, past work on token-free models has often introduced new model architectures designed to amortize the cost of operating directly on raw text. In this paper, we show that a standard Transformer architecture can be used with minimal modifications to process byte sequences. We carefully characterize the trade-offs in terms of parameter count, training FLOPs, and inference speed, and show that byte-level models are competitive with their token-level counterparts. We also demonstrate that byte-level models are significantly more robust to noise and perform better on tasks that are sensitive to spelling and pronunciation. As part of our contribution, we release a new set of pre-trained byte-level Transformer models based on the T5 architecture, as well as all code and data used in our experiments. !model image
[ "# ByT5 - xxl\n\nByT5 is a tokenizer-free version of Google's T5 and generally follows the architecture of MT5.\n\nByT5 was only pre-trained on mC4 excluding any supervised training with an average span-mask of 20 UTF-8 characters. Therefore, this model has to be fine-tuned before it is useable on a downstream task.\n\nByT5 works especially well on noisy text data,*e.g.*, 'google/byt5-xxl' significantly outperforms mt5-xxl on TweetQA.\n\nPaper: ByT5: Towards a token-free future with pre-trained byte-to-byte models\n\nAuthors: *Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel*", "## Example Inference\n\nByT5 works on raw UTF-8 bytes and can be used without a tokenizer:\n\n\n\nFor batched inference & training it is however recommended using a tokenizer class for padding:", "## Abstract\n\nMost widely-used pre-trained language models operate on sequences of tokens corresponding to word or subword units. Encoding text as a sequence of tokens requires a tokenizer, which is typically created as an independent artifact from the model. Token-free models that instead operate directly on raw text (bytes or characters) have many benefits: they can process text in any language out of the box, they are more robust to noise, and they minimize technical debt by removing complex and error-prone text preprocessing pipelines. Since byte or character sequences are longer than token sequences, past work on token-free models has often introduced new model architectures designed to amortize the cost of operating directly on raw text. In this paper, we show that a standard Transformer architecture can be used with minimal modifications to process byte sequences. We carefully characterize the trade-offs in terms of parameter count, training FLOPs, and inference speed, and show that byte-level models are competitive with their token-level counterparts. We also demonstrate that byte-level models are significantly more robust to noise and perform better on tasks that are sensitive to spelling and pronunciation. As part of our contribution, we release a new set of pre-trained byte-level Transformer models based on the T5 architecture, as well as all code and data used in our experiments.\n\n!model image" ]
[ "TAGS\n#transformers #pytorch #tf #t5 #text2text-generation #multilingual #af #am #ar #az #be #bg #bn #ca #ceb #co #cs #cy #da #de #el #en #eo #es #et #eu #fa #fi #fil #fr #fy #ga #gd #gl #gu #ha #haw #hi #hmn #ht #hu #hy #ig #is #it #iw #ja #jv #ka #kk #km #kn #ko #ku #ky #la #lb #lo #lt #lv #mg #mi #mk #ml #mn #mr #ms #mt #my #ne #nl #no #ny #pa #pl #ps #pt #ro #ru #sd #si #sk #sl #sm #sn #so #sq #sr #st #su #sv #sw #ta #te #tg #th #tr #uk #und #ur #uz #vi #xh #yi #yo #zh #zu #dataset-mc4 #arxiv-1907.06292 #arxiv-2105.13626 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n", "# ByT5 - xxl\n\nByT5 is a tokenizer-free version of Google's T5 and generally follows the architecture of MT5.\n\nByT5 was only pre-trained on mC4 excluding any supervised training with an average span-mask of 20 UTF-8 characters. Therefore, this model has to be fine-tuned before it is useable on a downstream task.\n\nByT5 works especially well on noisy text data,*e.g.*, 'google/byt5-xxl' significantly outperforms mt5-xxl on TweetQA.\n\nPaper: ByT5: Towards a token-free future with pre-trained byte-to-byte models\n\nAuthors: *Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel*", "## Example Inference\n\nByT5 works on raw UTF-8 bytes and can be used without a tokenizer:\n\n\n\nFor batched inference & training it is however recommended using a tokenizer class for padding:", "## Abstract\n\nMost widely-used pre-trained language models operate on sequences of tokens corresponding to word or subword units. Encoding text as a sequence of tokens requires a tokenizer, which is typically created as an independent artifact from the model. Token-free models that instead operate directly on raw text (bytes or characters) have many benefits: they can process text in any language out of the box, they are more robust to noise, and they minimize technical debt by removing complex and error-prone text preprocessing pipelines. Since byte or character sequences are longer than token sequences, past work on token-free models has often introduced new model architectures designed to amortize the cost of operating directly on raw text. In this paper, we show that a standard Transformer architecture can be used with minimal modifications to process byte sequences. We carefully characterize the trade-offs in terms of parameter count, training FLOPs, and inference speed, and show that byte-level models are competitive with their token-level counterparts. We also demonstrate that byte-level models are significantly more robust to noise and perform better on tasks that are sensitive to spelling and pronunciation. As part of our contribution, we release a new set of pre-trained byte-level Transformer models based on the T5 architecture, as well as all code and data used in our experiments.\n\n!model image" ]
feature-extraction
transformers
# CANINE-c (CANINE pre-trained with autoregressive character loss)

CANINE model pretrained on 104 languages using a masked language modeling (MLM) objective. It was introduced in the paper [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) and first released in [this repository](https://github.com/google-research/language/tree/master/language/canine).

What's special about CANINE is that it doesn't require an explicit tokenizer (such as WordPiece or SentencePiece) the way models like BERT and RoBERTa do. Instead, it operates directly at the character level: each character is turned into its [Unicode code point](https://en.wikipedia.org/wiki/Code_point#:~:text=For%20Unicode%2C%20the%20particular%20sequence,forming%20a%20self%2Dsynchronizing%20code.).

This means that input processing is trivial and can typically be accomplished as:

```python
input_ids = [ord(char) for char in text]
```

The `ord()` function is built into Python and turns each character into its Unicode code point.

Disclaimer: The team releasing CANINE did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

CANINE is a transformers model pretrained on a large corpus of multilingual data in a self-supervised fashion, similar to BERT. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), using an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives:

* Masked language modeling (MLM): one randomly masks part of the inputs, which the model needs to predict. This model (CANINE-c) is trained with an autoregressive character loss: several character spans are masked within each sequence, which the model then autoregressively predicts.
* Next sentence prediction (NSP): the model concatenates two sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict whether the two sentences were following each other or not.

This way, the model learns an inner representation of multiple languages that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the CANINE model as inputs.

## Intended uses & limitations

You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=canine) to look for fine-tuned versions on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation, you should look at models like GPT2.
### How to use Here is how to use this model: ```python from transformers import CanineTokenizer, CanineModel model = CanineModel.from_pretrained('google/canine-c') tokenizer = CanineTokenizer.from_pretrained('google/canine-c') inputs = ["Life is like a box of chocolates.", "You never know what you gonna get."] encoding = tokenizer(inputs, padding="longest", truncation=True, return_tensors="pt") outputs = model(**encoding) # forward pass pooled_output = outputs.pooler_output sequence_output = outputs.last_hidden_state ``` ## Training data The CANINE model was pretrained on the multilingual Wikipedia data of [mBERT](https://github.com/google-research/bert/blob/master/multilingual.md), which includes 104 languages. ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2103-06874, author = {Jonathan H. Clark and Dan Garrette and Iulia Turc and John Wieting}, title = {{CANINE:} Pre-training an Efficient Tokenization-Free Encoder for Language Representation}, journal = {CoRR}, volume = {abs/2103.06874}, year = {2021}, url = {https://arxiv.org/abs/2103.06874}, archivePrefix = {arXiv}, eprint = {2103.06874}, timestamp = {Tue, 16 Mar 2021 11:26:59 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-2103-06874.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
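As a complement to the `CanineTokenizer` example in the card above, here is a minimal sketch of the tokenizer-free path the card describes, building the input ids directly with `ord()`. The example text and shape comments are illustrative assumptions; unlike the tokenizer, this sketch does not add the model's special code points, so treat it as an illustration of the encoding idea rather than a drop-in replacement.

```python
import torch
from transformers import CanineModel

model = CanineModel.from_pretrained("google/canine-c")

text = "hello world"
# No tokenizer needed: each character is mapped to its Unicode code point.
input_ids = torch.tensor([[ord(char) for char in text]])

outputs = model(input_ids)  # forward pass
pooled_output = outputs.pooler_output        # (batch_size, hidden_size)
sequence_output = outputs.last_hidden_state  # (batch_size, num_characters, hidden_size)
```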
{"language": ["multilingual", "af", "sq", "ar", "an", "hy", "ast", "az", "ba", "eu", "bar", "be", "bn", "inc", "bs", "br", "bg", "my", "ca", "ceb", "ce", "zh", "cv", "hr", "cs", "da", "nl", "en", "et", "fi", "fr", "gl", "ka", "de", "el", "gu", "ht", "he", "hi", "hu", "is", "io", "id", "ga", "it", "ja", "jv", "kn", "kk", "ky", "ko", "la", "lv", "lt", "roa", "nds", "lm", "mk", "mg", "ms", "ml", "mr", "mn", "min", "ne", "new", "nb", "nn", "oc", "fa", "pms", "pl", "pt", "pa", "ro", "ru", "sco", "sr", "hr", "scn", "sk", "sl", "aze", "es", "su", "sw", "sv", "tl", "tg", "th", "ta", "tt", "te", "tr", "uk", "ud", "uz", "vi", "vo", "war", "cy", "fry", "pnb", "yo"], "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia"]}
google/canine-c
null
[ "transformers", "pytorch", "canine", "feature-extraction", "multilingual", "af", "sq", "ar", "an", "hy", "ast", "az", "ba", "eu", "bar", "be", "bn", "inc", "bs", "br", "bg", "my", "ca", "ceb", "ce", "zh", "cv", "hr", "cs", "da", "nl", "en", "et", "fi", "fr", "gl", "ka", "de", "el", "gu", "ht", "he", "hi", "hu", "is", "io", "id", "ga", "it", "ja", "jv", "kn", "kk", "ky", "ko", "la", "lv", "lt", "roa", "nds", "lm", "mk", "mg", "ms", "ml", "mr", "mn", "min", "ne", "new", "nb", "nn", "oc", "fa", "pms", "pl", "pt", "pa", "ro", "ru", "sco", "sr", "scn", "sk", "sl", "aze", "es", "su", "sw", "sv", "tl", "tg", "th", "ta", "tt", "te", "tr", "uk", "ud", "uz", "vi", "vo", "war", "cy", "fry", "pnb", "yo", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:2103.06874", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2103.06874" ]
[ "multilingual", "af", "sq", "ar", "an", "hy", "ast", "az", "ba", "eu", "bar", "be", "bn", "inc", "bs", "br", "bg", "my", "ca", "ceb", "ce", "zh", "cv", "hr", "cs", "da", "nl", "en", "et", "fi", "fr", "gl", "ka", "de", "el", "gu", "ht", "he", "hi", "hu", "is", "io", "id", "ga", "it", "ja", "jv", "kn", "kk", "ky", "ko", "la", "lv", "lt", "roa", "nds", "lm", "mk", "mg", "ms", "ml", "mr", "mn", "min", "ne", "new", "nb", "nn", "oc", "fa", "pms", "pl", "pt", "pa", "ro", "ru", "sco", "sr", "hr", "scn", "sk", "sl", "aze", "es", "su", "sw", "sv", "tl", "tg", "th", "ta", "tt", "te", "tr", "uk", "ud", "uz", "vi", "vo", "war", "cy", "fry", "pnb", "yo" ]
TAGS #transformers #pytorch #canine #feature-extraction #multilingual #af #sq #ar #an #hy #ast #az #ba #eu #bar #be #bn #inc #bs #br #bg #my #ca #ceb #ce #zh #cv #hr #cs #da #nl #en #et #fi #fr #gl #ka #de #el #gu #ht #he #hi #hu #is #io #id #ga #it #ja #jv #kn #kk #ky #ko #la #lv #lt #roa #nds #lm #mk #mg #ms #ml #mr #mn #min #ne #new #nb #nn #oc #fa #pms #pl #pt #pa #ro #ru #sco #sr #scn #sk #sl #aze #es #su #sw #sv #tl #tg #th #ta #tt #te #tr #uk #ud #uz #vi #vo #war #cy #fry #pnb #yo #dataset-bookcorpus #dataset-wikipedia #arxiv-2103.06874 #license-apache-2.0 #endpoints_compatible #has_space #region-us
# CANINE-c (CANINE pre-trained with autoregressive character loss) Pretrained CANINE model on 104 languages using a masked language modeling (MLM) objective. It was introduced in the paper CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation and first released in this repository. What's special about CANINE is that it doesn't require an explicit tokenizer (such as WordPiece or SentencePiece) as other models like BERT and RoBERTa. Instead, it directly operates at a character level: each character is turned into its Unicode code point. This means that input processing is trivial and can typically be accomplished as: The ord() function is part of Python, and turns each character into its Unicode code point. Disclaimer: The team releasing CANINE did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description CANINE is a transformers model pretrained on a large corpus of multilingual data in a self-supervised fashion, similar to BERT. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: * Masked language modeling (MLM): one randomly masks part of the inputs, which the model needs to predict. This model (CANINE-c) is trained with an autoregressive character loss. One masks several character spans within each sequence, which the model then autoregressively predicts. * Next sentence prediction (NSP): the model concatenates two sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not. This way, the model learns an inner representation of multiple languages that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the CANINE model as inputs. ## Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at models like GPT2. ### How to use Here is how to use this model: ## Training data The CANINE model was pretrained on on the multilingual Wikipedia data of mBERT, which includes 104 languages. ### BibTeX entry and citation info
[ "# CANINE-c (CANINE pre-trained with autoregressive character loss) \n\nPretrained CANINE model on 104 languages using a masked language modeling (MLM) objective. It was introduced in the paper CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation and first released in this repository. \n\nWhat's special about CANINE is that it doesn't require an explicit tokenizer (such as WordPiece or SentencePiece) as other models like BERT and RoBERTa. Instead, it directly operates at a character level: each character is turned into its Unicode code point. \n\nThis means that input processing is trivial and can typically be accomplished as: \n\n\n\nThe ord() function is part of Python, and turns each character into its Unicode code point.\n\nDisclaimer: The team releasing CANINE did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nCANINE is a transformers model pretrained on a large corpus of multilingual data in a self-supervised fashion, similar to BERT. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives:\n\n* Masked language modeling (MLM): one randomly masks part of the inputs, which the model needs to predict. This model (CANINE-c) is trained with an autoregressive character loss. One masks several character spans within each sequence, which the model then autoregressively predicts.\n* Next sentence prediction (NSP): the model concatenates two sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not.\n\nThis way, the model learns an inner representation of multiple languages that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the CANINE model as inputs.", "## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you.\n\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at models like GPT2.", "### How to use\n\nHere is how to use this model:", "## Training data\n\nThe CANINE model was pretrained on on the multilingual Wikipedia data of mBERT, which includes 104 languages.", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #canine #feature-extraction #multilingual #af #sq #ar #an #hy #ast #az #ba #eu #bar #be #bn #inc #bs #br #bg #my #ca #ceb #ce #zh #cv #hr #cs #da #nl #en #et #fi #fr #gl #ka #de #el #gu #ht #he #hi #hu #is #io #id #ga #it #ja #jv #kn #kk #ky #ko #la #lv #lt #roa #nds #lm #mk #mg #ms #ml #mr #mn #min #ne #new #nb #nn #oc #fa #pms #pl #pt #pa #ro #ru #sco #sr #scn #sk #sl #aze #es #su #sw #sv #tl #tg #th #ta #tt #te #tr #uk #ud #uz #vi #vo #war #cy #fry #pnb #yo #dataset-bookcorpus #dataset-wikipedia #arxiv-2103.06874 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n", "# CANINE-c (CANINE pre-trained with autoregressive character loss) \n\nPretrained CANINE model on 104 languages using a masked language modeling (MLM) objective. It was introduced in the paper CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation and first released in this repository. \n\nWhat's special about CANINE is that it doesn't require an explicit tokenizer (such as WordPiece or SentencePiece) as other models like BERT and RoBERTa. Instead, it directly operates at a character level: each character is turned into its Unicode code point. \n\nThis means that input processing is trivial and can typically be accomplished as: \n\n\n\nThe ord() function is part of Python, and turns each character into its Unicode code point.\n\nDisclaimer: The team releasing CANINE did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nCANINE is a transformers model pretrained on a large corpus of multilingual data in a self-supervised fashion, similar to BERT. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives:\n\n* Masked language modeling (MLM): one randomly masks part of the inputs, which the model needs to predict. This model (CANINE-c) is trained with an autoregressive character loss. One masks several character spans within each sequence, which the model then autoregressively predicts.\n* Next sentence prediction (NSP): the model concatenates two sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not.\n\nThis way, the model learns an inner representation of multiple languages that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the CANINE model as inputs.", "## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you.\n\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. 
For tasks such as text generation you should look at models like GPT2.", "### How to use\n\nHere is how to use this model:", "## Training data\n\nThe CANINE model was pretrained on on the multilingual Wikipedia data of mBERT, which includes 104 languages.", "### BibTeX entry and citation info" ]
feature-extraction
transformers
# CANINE-s (CANINE pre-trained with subword loss) Pretrained CANINE model on 104 languages using a masked language modeling (MLM) objective. It was introduced in the paper [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) and first released in [this repository](https://github.com/google-research/language/tree/master/language/canine). What's special about CANINE is that it doesn't require an explicit tokenizer (such as WordPiece or SentencePiece) as other models like BERT and RoBERTa. Instead, it directly operates at a character level: each character is turned into its [Unicode code point](https://en.wikipedia.org/wiki/Code_point#:~:text=For%20Unicode%2C%20the%20particular%20sequence,forming%20a%20self%2Dsynchronizing%20code.). This means that input processing is trivial and can typically be accomplished as: ``` input_ids = [ord(char) for char in text] ``` The ord() function is part of Python, and turns each character into its Unicode code point. Disclaimer: The team releasing CANINE did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description CANINE is a transformers model pretrained on a large corpus of multilingual data in a self-supervised fashion, similar to BERT. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: * Masked language modeling (MLM): one randomly masks part of the inputs, which the model needs to predict. This model (CANINE-s) is trained with a subword loss, meaning that the model needs to predict the identities of subword tokens, while taking characters as input. By reading characters yet predicting subword tokens, the hard token boundary constraint found in other models such as BERT is turned into a soft inductive bias in CANINE. * Next sentence prediction (NSP): the model concatenates two sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not. This way, the model learns an inner representation of multiple languages that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the CANINE model as inputs. ## Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=canine) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at models like GPT2. 
### How to use Here is how to use this model: ```python from transformers import CanineTokenizer, CanineModel model = CanineModel.from_pretrained('google/canine-s') tokenizer = CanineTokenizer.from_pretrained('google/canine-s') inputs = ["Life is like a box of chocolates.", "You never know what you gonna get."] encoding = tokenizer(inputs, padding="longest", truncation=True, return_tensors="pt") outputs = model(**encoding) # forward pass pooled_output = outputs.pooler_output sequence_output = outputs.last_hidden_state ``` ## Training data The CANINE model was pretrained on the multilingual Wikipedia data of [mBERT](https://github.com/google-research/bert/blob/master/multilingual.md), which includes 104 languages. ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2103-06874, author = {Jonathan H. Clark and Dan Garrette and Iulia Turc and John Wieting}, title = {{CANINE:} Pre-training an Efficient Tokenization-Free Encoder for Language Representation}, journal = {CoRR}, volume = {abs/2103.06874}, year = {2021}, url = {https://arxiv.org/abs/2103.06874}, archivePrefix = {arXiv}, eprint = {2103.06874}, timestamp = {Tue, 16 Mar 2021 11:26:59 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-2103-06874.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
{"language": ["multilingual", "af", "sq", "ar", "an", "hy", "ast", "az", "ba", "eu", "bar", "be", "bn", "inc", "bs", "br", "bg", "my", "ca", "ceb", "ce", "zh", "cv", "hr", "cs", "da", "nl", "en", "et", "fi", "fr", "gl", "ka", "de", "el", "gu", "ht", "he", "hi", "hu", "is", "io", "id", "ga", "it", "ja", "jv", "kn", "kk", "ky", "ko", "la", "lv", "lt", "roa", "nds", "lm", "mk", "mg", "ms", "ml", "mr", "mn", "min", "ne", "new", "nb", "nn", "oc", "fa", "pms", "pl", "pt", "pa", "ro", "ru", "sco", "sr", "hr", "scn", "sk", "sl", "aze", "es", "su", "sw", "sv", "tl", "tg", "th", "ta", "tt", "te", "tr", "uk", "ud", "uz", "vi", "vo", "war", "cy", "fry", "pnb", "yo"], "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia"]}
google/canine-s
null
[ "transformers", "pytorch", "canine", "feature-extraction", "multilingual", "af", "sq", "ar", "an", "hy", "ast", "az", "ba", "eu", "bar", "be", "bn", "inc", "bs", "br", "bg", "my", "ca", "ceb", "ce", "zh", "cv", "hr", "cs", "da", "nl", "en", "et", "fi", "fr", "gl", "ka", "de", "el", "gu", "ht", "he", "hi", "hu", "is", "io", "id", "ga", "it", "ja", "jv", "kn", "kk", "ky", "ko", "la", "lv", "lt", "roa", "nds", "lm", "mk", "mg", "ms", "ml", "mr", "mn", "min", "ne", "new", "nb", "nn", "oc", "fa", "pms", "pl", "pt", "pa", "ro", "ru", "sco", "sr", "scn", "sk", "sl", "aze", "es", "su", "sw", "sv", "tl", "tg", "th", "ta", "tt", "te", "tr", "uk", "ud", "uz", "vi", "vo", "war", "cy", "fry", "pnb", "yo", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:2103.06874", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2103.06874" ]
[ "multilingual", "af", "sq", "ar", "an", "hy", "ast", "az", "ba", "eu", "bar", "be", "bn", "inc", "bs", "br", "bg", "my", "ca", "ceb", "ce", "zh", "cv", "hr", "cs", "da", "nl", "en", "et", "fi", "fr", "gl", "ka", "de", "el", "gu", "ht", "he", "hi", "hu", "is", "io", "id", "ga", "it", "ja", "jv", "kn", "kk", "ky", "ko", "la", "lv", "lt", "roa", "nds", "lm", "mk", "mg", "ms", "ml", "mr", "mn", "min", "ne", "new", "nb", "nn", "oc", "fa", "pms", "pl", "pt", "pa", "ro", "ru", "sco", "sr", "hr", "scn", "sk", "sl", "aze", "es", "su", "sw", "sv", "tl", "tg", "th", "ta", "tt", "te", "tr", "uk", "ud", "uz", "vi", "vo", "war", "cy", "fry", "pnb", "yo" ]
TAGS #transformers #pytorch #canine #feature-extraction #multilingual #af #sq #ar #an #hy #ast #az #ba #eu #bar #be #bn #inc #bs #br #bg #my #ca #ceb #ce #zh #cv #hr #cs #da #nl #en #et #fi #fr #gl #ka #de #el #gu #ht #he #hi #hu #is #io #id #ga #it #ja #jv #kn #kk #ky #ko #la #lv #lt #roa #nds #lm #mk #mg #ms #ml #mr #mn #min #ne #new #nb #nn #oc #fa #pms #pl #pt #pa #ro #ru #sco #sr #scn #sk #sl #aze #es #su #sw #sv #tl #tg #th #ta #tt #te #tr #uk #ud #uz #vi #vo #war #cy #fry #pnb #yo #dataset-bookcorpus #dataset-wikipedia #arxiv-2103.06874 #license-apache-2.0 #endpoints_compatible #has_space #region-us
# CANINE-s (CANINE pre-trained with subword loss) Pretrained CANINE model on 104 languages using a masked language modeling (MLM) objective. It was introduced in the paper CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation and first released in this repository. What's special about CANINE is that it doesn't require an explicit tokenizer (such as WordPiece or SentencePiece) as other models like BERT and RoBERTa. Instead, it directly operates at a character level: each character is turned into its Unicode code point. This means that input processing is trivial and can typically be accomplished as: The ord() function is part of Python, and turns each character into its Unicode code point. Disclaimer: The team releasing CANINE did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description CANINE is a transformers model pretrained on a large corpus of multilingual data in a self-supervised fashion, similar to BERT. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: * Masked language modeling (MLM): one randomly masks part of the inputs, which the model needs to predict. This model (CANINE-s) is trained with a subword loss, meaning that the model needs to predict the identities of subword tokens, while taking characters as input. By reading characters yet predicting subword tokens, the hard token boundary constraint found in other models such as BERT is turned into a soft inductive bias in CANINE. * Next sentence prediction (NSP): the model concatenates two sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not. This way, the model learns an inner representation of multiple languages that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the CANINE model as inputs. ## Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at models like GPT2. ### How to use Here is how to use this model: ## Training data The CANINE model was pretrained on on the multilingual Wikipedia data of mBERT, which includes 104 languages. ### BibTeX entry and citation info
[ "# CANINE-s (CANINE pre-trained with subword loss) \n\nPretrained CANINE model on 104 languages using a masked language modeling (MLM) objective. It was introduced in the paper CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation and first released in this repository. \n\nWhat's special about CANINE is that it doesn't require an explicit tokenizer (such as WordPiece or SentencePiece) as other models like BERT and RoBERTa. Instead, it directly operates at a character level: each character is turned into its Unicode code point. \n\nThis means that input processing is trivial and can typically be accomplished as: \n\n\n\nThe ord() function is part of Python, and turns each character into its Unicode code point.\n\nDisclaimer: The team releasing CANINE did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nCANINE is a transformers model pretrained on a large corpus of multilingual data in a self-supervised fashion, similar to BERT. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives:\n\n* Masked language modeling (MLM): one randomly masks part of the inputs, which the model needs to predict. This model (CANINE-s) is trained with a subword loss, meaning that the model needs to predict the identities of subword tokens, while taking characters as input. By reading characters yet predicting subword tokens, the hard token boundary constraint found in other models such as BERT is turned into a soft inductive bias in CANINE.\n* Next sentence prediction (NSP): the model concatenates two sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not.\n\nThis way, the model learns an inner representation of multiple languages that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the CANINE model as inputs.", "## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you.\n\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at models like GPT2.", "### How to use\n\nHere is how to use this model:", "## Training data\n\nThe CANINE model was pretrained on on the multilingual Wikipedia data of mBERT, which includes 104 languages.", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #canine #feature-extraction #multilingual #af #sq #ar #an #hy #ast #az #ba #eu #bar #be #bn #inc #bs #br #bg #my #ca #ceb #ce #zh #cv #hr #cs #da #nl #en #et #fi #fr #gl #ka #de #el #gu #ht #he #hi #hu #is #io #id #ga #it #ja #jv #kn #kk #ky #ko #la #lv #lt #roa #nds #lm #mk #mg #ms #ml #mr #mn #min #ne #new #nb #nn #oc #fa #pms #pl #pt #pa #ro #ru #sco #sr #scn #sk #sl #aze #es #su #sw #sv #tl #tg #th #ta #tt #te #tr #uk #ud #uz #vi #vo #war #cy #fry #pnb #yo #dataset-bookcorpus #dataset-wikipedia #arxiv-2103.06874 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n", "# CANINE-s (CANINE pre-trained with subword loss) \n\nPretrained CANINE model on 104 languages using a masked language modeling (MLM) objective. It was introduced in the paper CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation and first released in this repository. \n\nWhat's special about CANINE is that it doesn't require an explicit tokenizer (such as WordPiece or SentencePiece) as other models like BERT and RoBERTa. Instead, it directly operates at a character level: each character is turned into its Unicode code point. \n\nThis means that input processing is trivial and can typically be accomplished as: \n\n\n\nThe ord() function is part of Python, and turns each character into its Unicode code point.\n\nDisclaimer: The team releasing CANINE did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nCANINE is a transformers model pretrained on a large corpus of multilingual data in a self-supervised fashion, similar to BERT. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives:\n\n* Masked language modeling (MLM): one randomly masks part of the inputs, which the model needs to predict. This model (CANINE-s) is trained with a subword loss, meaning that the model needs to predict the identities of subword tokens, while taking characters as input. By reading characters yet predicting subword tokens, the hard token boundary constraint found in other models such as BERT is turned into a soft inductive bias in CANINE.\n* Next sentence prediction (NSP): the model concatenates two sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not.\n\nThis way, the model learns an inner representation of multiple languages that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the CANINE model as inputs.", "## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you.\n\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. 
For tasks such as text generation you should look at models like GPT2.", "### How to use\n\nHere is how to use this model:", "## Training data\n\nThe CANINE model was pretrained on on the multilingual Wikipedia data of mBERT, which includes 104 languages.", "### BibTeX entry and citation info" ]
null
transformers
## ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators **ELECTRA** is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset. For a detailed description and experimental results, please refer to our paper [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://openreview.net/pdf?id=r1xMH1BtvB). This repository contains code to pre-train ELECTRA, including small ELECTRA models on a single GPU. It also supports fine-tuning ELECTRA on downstream tasks including classification tasks (e.g., [GLUE](https://gluebenchmark.com/)), QA tasks (e.g., [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/)), and sequence tagging tasks (e.g., [text chunking](https://www.clips.uantwerpen.be/conll2000/chunking/)). ## How to use the discriminator in `transformers` ```python from transformers import ElectraForPreTraining, ElectraTokenizerFast import torch discriminator = ElectraForPreTraining.from_pretrained("google/electra-base-discriminator") tokenizer = ElectraTokenizerFast.from_pretrained("google/electra-base-discriminator") sentence = "The quick brown fox jumps over the lazy dog" fake_sentence = "The quick brown fox fake over the lazy dog" fake_tokens = tokenizer.tokenize(fake_sentence) fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt") discriminator_outputs = discriminator(fake_inputs) predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2) [print("%7s" % token, end="") for token in fake_tokens] [print("%7s" % int(prediction), end="") for prediction in predictions.squeeze().tolist()] ```
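The snippet in the card above thresholds the discriminator logits with `sign` and `round`. A hedged alternative, not part of the original card, is to read them as per-token replacement probabilities via a sigmoid; this assumes the standard `ElectraForPreTraining` output, where `logits` holds one score per input token.

```python
import torch
from transformers import ElectraForPreTraining, ElectraTokenizerFast

discriminator = ElectraForPreTraining.from_pretrained("google/electra-base-discriminator")
tokenizer = ElectraTokenizerFast.from_pretrained("google/electra-base-discriminator")

fake_sentence = "The quick brown fox fake over the lazy dog"
inputs = tokenizer(fake_sentence, return_tensors="pt")

with torch.no_grad():
    logits = discriminator(**inputs).logits  # shape: (1, sequence_length)

# Probability that each token was replaced ("fake"); higher means more suspicious.
probs = torch.sigmoid(logits)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
for token, p in zip(tokens, probs.tolist()):
    print(f"{token:>10s}  {p:.3f}")
```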
{"language": "en", "license": "apache-2.0", "thumbnail": "https://huggingface.co/front/thumbnails/google.png"}
google/electra-base-discriminator
null
[ "transformers", "pytorch", "tf", "jax", "rust", "electra", "pretraining", "en", "arxiv:1406.2661", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1406.2661" ]
[ "en" ]
TAGS #transformers #pytorch #tf #jax #rust #electra #pretraining #en #arxiv-1406.2661 #license-apache-2.0 #endpoints_compatible #has_space #region-us
## ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators ELECTRA is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a GAN. At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the SQuAD 2.0 dataset. For a detailed description and experimental results, please refer to our paper ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. This repository contains code to pre-train ELECTRA, including small ELECTRA models on a single GPU. It also supports fine-tuning ELECTRA on downstream tasks including classification tasks (e.g,. GLUE), QA tasks (e.g., SQuAD), and sequence tagging tasks (e.g., text chunking). ## How to use the discriminator in 'transformers'
[ "## ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators\n\nELECTRA is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish \"real\" input tokens vs \"fake\" input tokens generated by another neural network, similar to the discriminator of a GAN. At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the SQuAD 2.0 dataset.\n\nFor a detailed description and experimental results, please refer to our paper ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators.\n\nThis repository contains code to pre-train ELECTRA, including small ELECTRA models on a single GPU. It also supports fine-tuning ELECTRA on downstream tasks including classification tasks (e.g,. GLUE), QA tasks (e.g., SQuAD), and sequence tagging tasks (e.g., text chunking).", "## How to use the discriminator in 'transformers'" ]
[ "TAGS\n#transformers #pytorch #tf #jax #rust #electra #pretraining #en #arxiv-1406.2661 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n", "## ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators\n\nELECTRA is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish \"real\" input tokens vs \"fake\" input tokens generated by another neural network, similar to the discriminator of a GAN. At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the SQuAD 2.0 dataset.\n\nFor a detailed description and experimental results, please refer to our paper ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators.\n\nThis repository contains code to pre-train ELECTRA, including small ELECTRA models on a single GPU. It also supports fine-tuning ELECTRA on downstream tasks including classification tasks (e.g,. GLUE), QA tasks (e.g., SQuAD), and sequence tagging tasks (e.g., text chunking).", "## How to use the discriminator in 'transformers'" ]
fill-mask
transformers
## ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators **ELECTRA** is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset. For a detailed description and experimental results, please refer to our paper [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://openreview.net/pdf?id=r1xMH1BtvB). This repository contains code to pre-train ELECTRA, including small ELECTRA models on a single GPU. It also supports fine-tuning ELECTRA on downstream tasks including classification tasks (e.g., [GLUE](https://gluebenchmark.com/)), QA tasks (e.g., [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/)), and sequence tagging tasks (e.g., [text chunking](https://www.clips.uantwerpen.be/conll2000/chunking/)). ## How to use the generator in `transformers` ```python from transformers import pipeline fill_mask = pipeline( "fill-mask", model="google/electra-base-generator", tokenizer="google/electra-base-generator" ) print( fill_mask(f"HuggingFace is creating a {fill_mask.tokenizer.mask_token} that the community uses to solve NLP tasks.") ) ```
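For completeness, here is a hedged sketch of using the generator without the pipeline helper, through the generic masked-LM interface (`ElectraForMaskedLM`); the top-k inspection below is an illustrative assumption, not part of the original card.

```python
import torch
from transformers import ElectraForMaskedLM, ElectraTokenizerFast

tokenizer = ElectraTokenizerFast.from_pretrained("google/electra-base-generator")
model = ElectraForMaskedLM.from_pretrained("google/electra-base-generator")

text = f"HuggingFace is creating a {tokenizer.mask_token} that the community uses to solve NLP tasks."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Locate the [MASK] position and show the five most likely fillers.
mask_positions = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
top_ids = logits[0, mask_positions[0]].topk(5).indices.tolist()
print(tokenizer.convert_ids_to_tokens(top_ids))
```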
{"language": "en", "license": "apache-2.0", "thumbnail": "https://huggingface.co/front/thumbnails/google.png"}
google/electra-base-generator
null
[ "transformers", "pytorch", "tf", "jax", "rust", "electra", "fill-mask", "en", "arxiv:1406.2661", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1406.2661" ]
[ "en" ]
TAGS #transformers #pytorch #tf #jax #rust #electra #fill-mask #en #arxiv-1406.2661 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
## ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators ELECTRA is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a GAN. At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the SQuAD 2.0 dataset. For a detailed description and experimental results, please refer to our paper ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. This repository contains code to pre-train ELECTRA, including small ELECTRA models on a single GPU. It also supports fine-tuning ELECTRA on downstream tasks including classification tasks (e.g,. GLUE), QA tasks (e.g., SQuAD), and sequence tagging tasks (e.g., text chunking). ## How to use the generator in 'transformers'
[ "## ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators\n\nELECTRA is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish \"real\" input tokens vs \"fake\" input tokens generated by another neural network, similar to the discriminator of a GAN. At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the SQuAD 2.0 dataset.\n\nFor a detailed description and experimental results, please refer to our paper ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators.\n\nThis repository contains code to pre-train ELECTRA, including small ELECTRA models on a single GPU. It also supports fine-tuning ELECTRA on downstream tasks including classification tasks (e.g,. GLUE), QA tasks (e.g., SQuAD), and sequence tagging tasks (e.g., text chunking).", "## How to use the generator in 'transformers'" ]
[ "TAGS\n#transformers #pytorch #tf #jax #rust #electra #fill-mask #en #arxiv-1406.2661 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "## ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators\n\nELECTRA is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish \"real\" input tokens vs \"fake\" input tokens generated by another neural network, similar to the discriminator of a GAN. At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the SQuAD 2.0 dataset.\n\nFor a detailed description and experimental results, please refer to our paper ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators.\n\nThis repository contains code to pre-train ELECTRA, including small ELECTRA models on a single GPU. It also supports fine-tuning ELECTRA on downstream tasks including classification tasks (e.g,. GLUE), QA tasks (e.g., SQuAD), and sequence tagging tasks (e.g., text chunking).", "## How to use the generator in 'transformers'" ]
null
transformers
## ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators **ELECTRA** is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset. For a detailed description and experimental results, please refer to our paper [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://openreview.net/pdf?id=r1xMH1BtvB). This repository contains code to pre-train ELECTRA, including small ELECTRA models on a single GPU. It also supports fine-tuning ELECTRA on downstream tasks including classification tasks (e.g., [GLUE](https://gluebenchmark.com/)), QA tasks (e.g., [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/)), and sequence tagging tasks (e.g., [text chunking](https://www.clips.uantwerpen.be/conll2000/chunking/)). ## How to use the discriminator in `transformers` ```python from transformers import ElectraForPreTraining, ElectraTokenizerFast import torch discriminator = ElectraForPreTraining.from_pretrained("google/electra-large-discriminator") tokenizer = ElectraTokenizerFast.from_pretrained("google/electra-large-discriminator") sentence = "The quick brown fox jumps over the lazy dog" fake_sentence = "The quick brown fox fake over the lazy dog" fake_tokens = tokenizer.tokenize(fake_sentence) fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt") discriminator_outputs = discriminator(fake_inputs) predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2) [print("%7s" % token, end="") for token in fake_tokens] [print("%7s" % int(prediction), end="") for prediction in predictions.squeeze().tolist()] ```
{"language": "en", "license": "apache-2.0", "thumbnail": "https://huggingface.co/front/thumbnails/google.png"}
google/electra-large-discriminator
null
[ "transformers", "pytorch", "tf", "jax", "electra", "pretraining", "en", "arxiv:1406.2661", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1406.2661" ]
[ "en" ]
TAGS #transformers #pytorch #tf #jax #electra #pretraining #en #arxiv-1406.2661 #license-apache-2.0 #endpoints_compatible #has_space #region-us
## ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators ELECTRA is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a GAN. At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the SQuAD 2.0 dataset. For a detailed description and experimental results, please refer to our paper ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. This repository contains code to pre-train ELECTRA, including small ELECTRA models on a single GPU. It also supports fine-tuning ELECTRA on downstream tasks including classification tasks (e.g,. GLUE), QA tasks (e.g., SQuAD), and sequence tagging tasks (e.g., text chunking). ## How to use the discriminator in 'transformers'
[ "## ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators\n\nELECTRA is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish \"real\" input tokens vs \"fake\" input tokens generated by another neural network, similar to the discriminator of a GAN. At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the SQuAD 2.0 dataset.\n\nFor a detailed description and experimental results, please refer to our paper ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators.\n\nThis repository contains code to pre-train ELECTRA, including small ELECTRA models on a single GPU. It also supports fine-tuning ELECTRA on downstream tasks including classification tasks (e.g,. GLUE), QA tasks (e.g., SQuAD), and sequence tagging tasks (e.g., text chunking).", "## How to use the discriminator in 'transformers'" ]
[ "TAGS\n#transformers #pytorch #tf #jax #electra #pretraining #en #arxiv-1406.2661 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n", "## ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators\n\nELECTRA is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish \"real\" input tokens vs \"fake\" input tokens generated by another neural network, similar to the discriminator of a GAN. At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the SQuAD 2.0 dataset.\n\nFor a detailed description and experimental results, please refer to our paper ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators.\n\nThis repository contains code to pre-train ELECTRA, including small ELECTRA models on a single GPU. It also supports fine-tuning ELECTRA on downstream tasks including classification tasks (e.g,. GLUE), QA tasks (e.g., SQuAD), and sequence tagging tasks (e.g., text chunking).", "## How to use the discriminator in 'transformers'" ]
fill-mask
transformers
## ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators **ELECTRA** is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset. For a detailed description and experimental results, please refer to our paper [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://openreview.net/pdf?id=r1xMH1BtvB). This repository contains code to pre-train ELECTRA, including small ELECTRA models on a single GPU. It also supports fine-tuning ELECTRA on downstream tasks including classification tasks (e.g., [GLUE](https://gluebenchmark.com/)), QA tasks (e.g., [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/)), and sequence tagging tasks (e.g., [text chunking](https://www.clips.uantwerpen.be/conll2000/chunking/)). ## How to use the generator in `transformers` ```python from transformers import pipeline fill_mask = pipeline( "fill-mask", model="google/electra-large-generator", tokenizer="google/electra-large-generator" ) print( fill_mask(f"HuggingFace is creating a {fill_mask.tokenizer.mask_token} that the community uses to solve NLP tasks.") ) ```
{"language": "en", "license": "apache-2.0", "thumbnail": "https://huggingface.co/front/thumbnails/google.png"}
google/electra-large-generator
null
[ "transformers", "pytorch", "tf", "jax", "electra", "fill-mask", "en", "arxiv:1406.2661", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1406.2661" ]
[ "en" ]
TAGS #transformers #pytorch #tf #jax #electra #fill-mask #en #arxiv-1406.2661 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
## ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators ELECTRA is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a GAN. At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the SQuAD 2.0 dataset. For a detailed description and experimental results, please refer to our paper ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. This repository contains code to pre-train ELECTRA, including small ELECTRA models on a single GPU. It also supports fine-tuning ELECTRA on downstream tasks including classification tasks (e.g,. GLUE), QA tasks (e.g., SQuAD), and sequence tagging tasks (e.g., text chunking). ## How to use the generator in 'transformers'
[ "## ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators\n\nELECTRA is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish \"real\" input tokens vs \"fake\" input tokens generated by another neural network, similar to the discriminator of a GAN. At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the SQuAD 2.0 dataset.\n\nFor a detailed description and experimental results, please refer to our paper ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators.\n\nThis repository contains code to pre-train ELECTRA, including small ELECTRA models on a single GPU. It also supports fine-tuning ELECTRA on downstream tasks including classification tasks (e.g,. GLUE), QA tasks (e.g., SQuAD), and sequence tagging tasks (e.g., text chunking).", "## How to use the generator in 'transformers'" ]
[ "TAGS\n#transformers #pytorch #tf #jax #electra #fill-mask #en #arxiv-1406.2661 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "## ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators\n\nELECTRA is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish \"real\" input tokens vs \"fake\" input tokens generated by another neural network, similar to the discriminator of a GAN. At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the SQuAD 2.0 dataset.\n\nFor a detailed description and experimental results, please refer to our paper ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators.\n\nThis repository contains code to pre-train ELECTRA, including small ELECTRA models on a single GPU. It also supports fine-tuning ELECTRA on downstream tasks including classification tasks (e.g,. GLUE), QA tasks (e.g., SQuAD), and sequence tagging tasks (e.g., text chunking).", "## How to use the generator in 'transformers'" ]
null
transformers
## ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators **ELECTRA** is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset. For a detailed description and experimental results, please refer to our paper [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://openreview.net/pdf?id=r1xMH1BtvB). This repository contains code to pre-train ELECTRA, including small ELECTRA models on a single GPU. It also supports fine-tuning ELECTRA on downstream tasks including classification tasks (e.g., [GLUE](https://gluebenchmark.com/)), QA tasks (e.g., [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/)), and sequence tagging tasks (e.g., [text chunking](https://www.clips.uantwerpen.be/conll2000/chunking/)). ## How to use the discriminator in `transformers` ```python from transformers import ElectraForPreTraining, ElectraTokenizerFast import torch discriminator = ElectraForPreTraining.from_pretrained("google/electra-small-discriminator") tokenizer = ElectraTokenizerFast.from_pretrained("google/electra-small-discriminator") sentence = "The quick brown fox jumps over the lazy dog" fake_sentence = "The quick brown fox fake over the lazy dog" fake_tokens = tokenizer.tokenize(fake_sentence) fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt") discriminator_outputs = discriminator(fake_inputs) predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2) [print("%7s" % token, end="") for token in fake_tokens] [print("%7s" % int(prediction), end="") for prediction in predictions.squeeze().tolist()] ```
{"language": "en", "license": "apache-2.0", "thumbnail": "https://huggingface.co/front/thumbnails/google.png"}
google/electra-small-discriminator
null
[ "transformers", "pytorch", "tf", "jax", "electra", "pretraining", "en", "arxiv:1406.2661", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1406.2661" ]
[ "en" ]
TAGS #transformers #pytorch #tf #jax #electra #pretraining #en #arxiv-1406.2661 #license-apache-2.0 #endpoints_compatible #region-us
## ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators ELECTRA is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a GAN. At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the SQuAD 2.0 dataset. For a detailed description and experimental results, please refer to our paper ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. This repository contains code to pre-train ELECTRA, including small ELECTRA models on a single GPU. It also supports fine-tuning ELECTRA on downstream tasks including classification tasks (e.g,. GLUE), QA tasks (e.g., SQuAD), and sequence tagging tasks (e.g., text chunking). ## How to use the discriminator in 'transformers'
[ "## ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators\n\nELECTRA is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish \"real\" input tokens vs \"fake\" input tokens generated by another neural network, similar to the discriminator of a GAN. At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the SQuAD 2.0 dataset.\n\nFor a detailed description and experimental results, please refer to our paper ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators.\n\nThis repository contains code to pre-train ELECTRA, including small ELECTRA models on a single GPU. It also supports fine-tuning ELECTRA on downstream tasks including classification tasks (e.g,. GLUE), QA tasks (e.g., SQuAD), and sequence tagging tasks (e.g., text chunking).", "## How to use the discriminator in 'transformers'" ]
[ "TAGS\n#transformers #pytorch #tf #jax #electra #pretraining #en #arxiv-1406.2661 #license-apache-2.0 #endpoints_compatible #region-us \n", "## ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators\n\nELECTRA is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish \"real\" input tokens vs \"fake\" input tokens generated by another neural network, similar to the discriminator of a GAN. At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the SQuAD 2.0 dataset.\n\nFor a detailed description and experimental results, please refer to our paper ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators.\n\nThis repository contains code to pre-train ELECTRA, including small ELECTRA models on a single GPU. It also supports fine-tuning ELECTRA on downstream tasks including classification tasks (e.g,. GLUE), QA tasks (e.g., SQuAD), and sequence tagging tasks (e.g., text chunking).", "## How to use the discriminator in 'transformers'" ]
fill-mask
transformers
**WARNING**: This is the official generator checkpoint as in the [ELECTRA original codebase](https://github.com/google-research/electra). However, this model is not scaled properly for pre-training with [google/electra-small-discriminator](https://huggingface.co/google/electra-small-discriminator). The paper recommends a hyperparameter multiplier of 1/4 between the discriminator and generator for this given model to avoid training instabilities. This would not be the case when using `google/electra-small-generator` and `google/electra-small-discriminator`, which are similar in size. ## ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators **ELECTRA** is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset. For a detailed description and experimental results, please refer to our paper [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://openreview.net/pdf?id=r1xMH1BtvB). This repository contains code to pre-train ELECTRA, including small ELECTRA models on a single GPU. It also supports fine-tuning ELECTRA on downstream tasks including classification tasks (e.g., [GLUE](https://gluebenchmark.com/)), QA tasks (e.g., [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/)), and sequence tagging tasks (e.g., [text chunking](https://www.clips.uantwerpen.be/conll2000/chunking/)). ## How to use the generator in `transformers` ```python from transformers import pipeline fill_mask = pipeline( "fill-mask", model="google/electra-small-generator", tokenizer="google/electra-small-generator" ) print( fill_mask(f"HuggingFace is creating a {fill_mask.tokenizer.mask_token} that the community uses to solve NLP tasks.") ) ```
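For more control than the `fill-mask` pipeline offers, the generator can also be loaded directly with a masked-LM head. The snippet below is a minimal sketch rather than part of the original card; the variable names and the top-5 cutoff are illustrative:

```python
from transformers import ElectraForMaskedLM, ElectraTokenizerFast
import torch

tokenizer = ElectraTokenizerFast.from_pretrained("google/electra-small-generator")
model = ElectraForMaskedLM.from_pretrained("google/electra-small-generator")

text = f"HuggingFace is creating a {tokenizer.mask_token} that the community uses to solve NLP tasks."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Find the [MASK] position and report the five highest-scoring replacement tokens.
mask_positions = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_ids = logits[0, mask_positions[0]].topk(5).indices.tolist()
print(tokenizer.convert_ids_to_tokens(top_ids))
```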
{"language": "en", "license": "apache-2.0", "thumbnail": "https://huggingface.co/front/thumbnails/google.png"}
google/electra-small-generator
null
[ "transformers", "pytorch", "tf", "jax", "electra", "fill-mask", "en", "arxiv:1406.2661", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1406.2661" ]
[ "en" ]
TAGS #transformers #pytorch #tf #jax #electra #fill-mask #en #arxiv-1406.2661 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
WARNING: This is the official generator checkpoint as in the ELECTRA original codebase. However, this model is not scaled properly for pre-training with google/electra-small-discriminator. The paper recommends a hyperparameter multiplier of 1/4 between the discriminator and generator for this given model to avoid training instabilities. This would not be the case when using 'google/electra-small-generator' and 'google/electra-small-discriminator', which are similar in size. ## ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators ELECTRA is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a GAN. At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the SQuAD 2.0 dataset. For a detailed description and experimental results, please refer to our paper ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. This repository contains code to pre-train ELECTRA, including small ELECTRA models on a single GPU. It also supports fine-tuning ELECTRA on downstream tasks including classification tasks (e.g,. GLUE), QA tasks (e.g., SQuAD), and sequence tagging tasks (e.g., text chunking). ## How to use the generator in 'transformers'
[ "## ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators\n\nELECTRA is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish \"real\" input tokens vs \"fake\" input tokens generated by another neural network, similar to the discriminator of a GAN. At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the SQuAD 2.0 dataset.\n\nFor a detailed description and experimental results, please refer to our paper ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators.\n\nThis repository contains code to pre-train ELECTRA, including small ELECTRA models on a single GPU. It also supports fine-tuning ELECTRA on downstream tasks including classification tasks (e.g,. GLUE), QA tasks (e.g., SQuAD), and sequence tagging tasks (e.g., text chunking).", "## How to use the generator in 'transformers'" ]
[ "TAGS\n#transformers #pytorch #tf #jax #electra #fill-mask #en #arxiv-1406.2661 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "## ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators\n\nELECTRA is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish \"real\" input tokens vs \"fake\" input tokens generated by another neural network, similar to the discriminator of a GAN. At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the SQuAD 2.0 dataset.\n\nFor a detailed description and experimental results, please refer to our paper ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators.\n\nThis repository contains code to pre-train ELECTRA, including small ELECTRA models on a single GPU. It also supports fine-tuning ELECTRA on downstream tasks including classification tasks (e.g,. GLUE), QA tasks (e.g., SQuAD), and sequence tagging tasks (e.g., text chunking).", "## How to use the generator in 'transformers'" ]
null
transformers
# FNet base model Pretrained model on English language using a masked language modeling (MLM) and next sentence prediction (NSP) objective. It was introduced in [this paper](https://arxiv.org/abs/2105.03824) and first released in [this repository](https://github.com/google-research/google-research/tree/master/f_net). This model is cased: it makes a difference between english and English. The model achieves 0.58 accuracy on the MLM objective and 0.80 on the NSP objective. Disclaimer: This model card has been written by [gchhablani](https://huggingface.co/gchhablani). ## Model description FNet is a transformers model with attention replaced with Fourier transforms. Hence, the inputs do not contain an `attention_mask`. It is pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the FNet model as inputs. ## Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=fnet) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at a model like GPT2. ## Training data The FNet model was pretrained on [C4](https://huggingface.co/datasets/c4), a cleaned version of the Common Crawl dataset. ## Training procedure ### Preprocessing The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 32,000. The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two "sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. ### Pretraining FNet-base was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size of 256. The sequence length was limited to 512 tokens. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. ## Evaluation results FNet-base was fine-tuned and evaluated on the validation data of the [GLUE benchmark](https://huggingface.co/datasets/glue). The results of the official model (written in Flax) can be seen in Table 1 on page 7 of [the official paper](https://arxiv.org/abs/2105.03824). For comparison, this model (ported to PyTorch) was fine-tuned and evaluated using the [official Hugging Face GLUE evaluation scripts](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification#glue-tasks) alongside [bert-base-cased](https://hf.co/models/bert-base-cased). The training was done on a single 16GB NVIDIA Tesla V100 GPU. For MRPC/WNLI, the models were trained for 5 epochs, while for other tasks, the models were trained for 3 epochs. A sequence length of 512 was used with batch size 16 and learning rate 2e-5. The following table summarizes the results for [fnet-base](https://huggingface.co/google/fnet-base) (called *FNet (PyTorch) - Reproduced*) and [bert-base-cased](https://hf.co/models/bert-base-cased) (called *Bert (PyTorch) - Reproduced*) in terms of **fine-tuning** speed. The format is *hour:min:seconds*. **Note** that the authors compared **pre-training** speed in [the official paper](https://arxiv.org/abs/2105.03824) instead.
| Task/Model | FNet-base (PyTorch) |Bert-base (PyTorch)| |:----:|:-----------:|:----:| | MNLI-(m/mm) | [06:40:55](https://huggingface.co/gchhablani/fnet-base-finetuned-mnli) | [09:52:33](https://huggingface.co/gchhablani/bert-base-cased-finetuned-mnli)| | QQP | [06:21:16](https://huggingface.co/gchhablani/fnet-base-finetuned-qqp) | [09:25:01](https://huggingface.co/gchhablani/bert-base-cased-finetuned-qqp) | | QNLI | [01:48:22](https://huggingface.co/gchhablani/fnet-base-finetuned-qnli) | [02:40:22](https://huggingface.co/gchhablani/bert-base-cased-finetuned-qnli)| | SST-2 | [01:09:27](https://huggingface.co/gchhablani/fnet-base-finetuned-sst2) | [01:42:17](https://huggingface.co/gchhablani/bert-base-cased-finetuned-sst2)| | CoLA | [00:09:47](https://huggingface.co/gchhablani/fnet-base-finetuned-cola) | [00:14:20](https://huggingface.co/gchhablani/bert-base-cased-finetuned-cola)| | STS-B | [00:07:09](https://huggingface.co/gchhablani/fnet-base-finetuned-stsb) | [00:10:24](https://huggingface.co/gchhablani/bert-base-cased-finetuned-stsb)| | MRPC | [00:07:48](https://huggingface.co/gchhablani/fnet-base-finetuned-mrpc) | [00:11:12](https://huggingface.co/gchhablani/bert-base-cased-finetuned-mrpc)| | RTE | [00:03:24](https://huggingface.co/gchhablani/fnet-base-finetuned-rte) | [00:04:51](https://huggingface.co/gchhablani/bert-base-cased-finetuned-rte)| | WNLI | [00:02:37](https://huggingface.co/gchhablani/fnet-base-finetuned-wnli) | [00:03:23](https://huggingface.co/gchhablani/bert-base-cased-finetuned-wnli)| | SUM | 16:30:45 | 24:23:56 | On average the PyTorch version of FNet-base requires *ca.* 32% less time for GLUE fine-tuning on GPU. The following table summarizes the results for [fnet-base](https://huggingface.co/google/fnet-base) (called *FNet (PyTorch) - Reproduced*) and [bert-base-cased](https://hf.co/models/bert-base-cased) (called *Bert (PyTorch) - Reproduced*) in terms of performance and compares it to the reported performance of the official FNet-base model (called *FNet (Flax) - Official*). Note that the training hyperparameters of the reproduced models were not the same as the official model, so the performance may differ significantly for some tasks (for example: CoLA). | Task/Model | Metric | FNet-base (PyTorch) | Bert-base (PyTorch) | FNet-Base (Flax - official) | |:----:|:-----------:|:----:|:-----------:|:----:| | MNLI-(m/mm) | Accuracy or Match/Mismatch | [76.75](https://huggingface.co/gchhablani/fnet-base-finetuned-mnli) | [84.10](https://huggingface.co/gchhablani/bert-base-cased-finetuned-mnli) | 72/73 | | QQP | mean(Accuracy,F1) | [86.5](https://huggingface.co/gchhablani/fnet-base-finetuned-qqp) | [89.26](https://huggingface.co/gchhablani/bert-base-cased-finetuned-qqp) | 83 | | QNLI | Accuracy | [84.39](https://huggingface.co/gchhablani/fnet-base-finetuned-qnli) | [90.99](https://huggingface.co/gchhablani/bert-base-cased-finetuned-qnli) | 80 | | SST-2 | Accuracy | [89.45](https://huggingface.co/gchhablani/fnet-base-finetuned-sst2) | [92.32](https://huggingface.co/gchhablani/bert-base-cased-finetuned-sst2) | 95 | | CoLA | Matthews corr or Accuracy | [35.94](https://huggingface.co/gchhablani/fnet-base-finetuned-cola) | [59.57](https://huggingface.co/gchhablani/bert-base-cased-finetuned-cola) | 69 | | STS-B | Spearman corr. 
| [82.19](https://huggingface.co/gchhablani/fnet-base-finetuned-stsb) | [88.98](https://huggingface.co/gchhablani/bert-base-cased-finetuned-stsb) | 79 | | MRPC | mean(F1/Accuracy) | [81.15](https://huggingface.co/gchhablani/fnet-base-finetuned-mrpc) | [88.15](https://huggingface.co/gchhablani/bert-base-cased-finetuned-mrpc) | 76 | | RTE | Accuracy | [62.82](https://huggingface.co/gchhablani/fnet-base-finetuned-rte) | [67.15](https://huggingface.co/gchhablani/bert-base-cased-finetuned-rte) | 63 | | WNLI | Accuracy | [54.93](https://huggingface.co/gchhablani/fnet-base-finetuned-wnli) | [46.48](https://huggingface.co/gchhablani/bert-base-cased-finetuned-wnli) | - | | Avg | - | 72.7 | 78.6 | 76.7 | We can see that FNet-base achieves around 93% of BERT-base's performance on average. For more details, please refer to the checkpoints linked with the scores. On overview of all fine-tuned checkpoints of the following table can be accessed [here](https://huggingface.co/models?other=fnet-bert-base-comparison). ### How to use You can use this model directly with a pipeline for masked language modeling: **Note: The mask filling pipeline doesn't work exactly as the original model performs masking after converting to tokens. In masking pipeline an additional space is added after the [MASK].** ```python >>> from transformers import FNetForMaskedLM, FNetTokenizer, pipeline >>> tokenizer = FNetTokenizer.from_pretrained("google/fnet-base") >>> model = FNetForMaskedLM.from_pretrained("google/fnet-base") >>> unmasker = pipeline('fill-mask', model=model, tokenizer=tokenizer) >>> unmasker("Hello I'm a [MASK] model.") [ {"sequence": "hello i'm a new model.", "score": 0.12073223292827606, "token": 351, "token_str": "new"}, {"sequence": "hello i'm a first model.", "score": 0.08501081168651581, "token": 478, "token_str": "first"}, {"sequence": "hello i'm a next model.", "score": 0.060546260327100754, "token": 1037, "token_str": "next"}, {"sequence": "hello i'm a last model.", "score": 0.038265593349933624, "token": 813, "token_str": "last"}, {"sequence": "hello i'm a sister model.", "score": 0.033868927508592606, "token": 6232, "token_str": "sister"}, ] ``` Here is how to use this model to get the features of a given text in PyTorch: **Note: You must specify the maximum sequence length to be 512 and truncate/pad to the same length because the original model has no attention mask and considers all the hidden states during forward pass.** ```python from transformers import FNetTokenizer, FNetModel tokenizer = FNetTokenizer.from_pretrained("google/fnet-base") model = FNetModel.from_pretrained("google/fnet-base") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt', padding='max_length', truncation=True, max_length=512) output = model(**encoded_input) ``` ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2105-03824, author = {James Lee{-}Thorp and Joshua Ainslie and Ilya Eckstein and Santiago Onta{\~{n}}{\'{o}}n}, title = {FNet: Mixing Tokens with Fourier Transforms}, journal = {CoRR}, volume = {abs/2105.03824}, year = {2021}, url = {https://arxiv.org/abs/2105.03824}, archivePrefix = {arXiv}, eprint = {2105.03824}, timestamp = {Fri, 14 May 2021 12:13:30 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2105-03824.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` ## Contributions Thanks to [@gchhablani](https://huggingface.co/gchhablani) for adding this model.
{"language": "en", "license": "apache-2.0", "tags": ["fnet"], "datasets": ["c4"]}
google/fnet-base
null
[ "transformers", "pytorch", "rust", "fnet", "pretraining", "en", "dataset:c4", "arxiv:2105.03824", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2105.03824" ]
[ "en" ]
TAGS #transformers #pytorch #rust #fnet #pretraining #en #dataset-c4 #arxiv-2105.03824 #license-apache-2.0 #endpoints_compatible #region-us
FNet base model =============== Pretrained model on English language using a masked language modeling (MLM) and next sentence prediction (NSP) objective. It was introduced in this paper and first released in this repository. This model is cased: it makes a difference between english and English. The model achieves 0.58 accuracy on MLM objective and 0.80 on NSP objective. Disclaimer: This model card has been written by gchhablani. Model description ----------------- FNet is a transformers model with attention replaced with fourier transforms. Hence, the inputs do not contain an 'attention\_mask'. It is pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: * Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. * Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the FNet model as inputs. Intended uses & limitations --------------------------- You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. Training data ------------- The FNet model was pretrained on C4, a cleaned version of the Common Crawl dataset. Training procedure ------------------ ### Preprocessing The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 32,000. The inputs of the model are then of the form: With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two "sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: * 15% of the tokens are masked. * In 80% of the cases, the masked tokens are replaced by '[MASK]'. 
* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. * In the 10% remaining cases, the masked tokens are left as is. ### Pretraining FNet-base was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size of 256. The sequence length was limited to 512 tokens. The optimizer used is Adam with a learning rate of 1e-4, \(\beta\_{1} = 0.9\) and \(\beta\_{2} = 0.999\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. Evaluation results ------------------ FNet-base was fine-tuned and evaluated on the validation data of the GLUE benchamrk. The results of the official model (written in Flax) can be seen in Table 1 on page 7 of the official paper. For comparison, this model (ported to PyTorch) was fine-tuned and evaluated using the official Hugging Face GLUE evaluation scripts alongside bert-base-cased for comparison. The training was done on a single 16GB NVIDIA Tesla V100 GPU. For MRPC/WNLI, the models were trained for 5 epochs, while for other tasks, the models were trained for 3 epochs. A sequence length of 512 was used with batch size 16 and learning rate 2e-5. The following table summarizes the results for fnet-base (called *FNet (PyTorch) - Reproduced*) and bert-base-cased (called *Bert (PyTorch) - Reproduced*) in terms of fine-tuning speed. The format is *hour:min:seconds*. Note that the authors compared pre-traning speed in the official paper instead. On average the PyTorch version of FNet-base requires *ca.* 32% less time for GLUE fine-tuning on GPU. The following table summarizes the results for fnet-base (called *FNet (PyTorch) - Reproduced*) and bert-base-cased (called *Bert (PyTorch) - Reproduced*) in terms of performance and compares it to the reported performance of the official FNet-base model (called *FNet (Flax) - Official*). Note that the training hyperparameters of the reproduced models were not the same as the official model, so the performance may differ significantly for some tasks (for example: CoLA). We can see that FNet-base achieves around 93% of BERT-base's performance on average. For more details, please refer to the checkpoints linked with the scores. On overview of all fine-tuned checkpoints of the following table can be accessed here. ### How to use You can use this model directly with a pipeline for masked language modeling: Note: The mask filling pipeline doesn't work exactly as the original model performs masking after converting to tokens. In masking pipeline an additional space is added after the [MASK]. Here is how to use this model to get the features of a given text in PyTorch: Note: You must specify the maximum sequence length to be 512 and truncate/pad to the same length because the original model has no attention mask and considers all the hidden states during forward pass. ### BibTeX entry and citation info Contributions ------------- Thanks to @gchhablani for adding this model.
[ "### Preprocessing\n\n\nThe texts are lowercased and tokenized using SentencePiece and a vocabulary size of 32,000. The inputs of the model are\nthen of the form:\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.", "### Pretraining\n\n\nFNet-base was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size\nof 256. The sequence length was limited to 512 tokens. The optimizer\nused is Adam with a learning rate of 1e-4, \\(\\beta\\_{1} = 0.9\\) and \\(\\beta\\_{2} = 0.999\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.\n\n\nEvaluation results\n------------------\n\n\nFNet-base was fine-tuned and evaluated on the validation data of the GLUE benchamrk. The results of the official model (written in Flax) can be seen in Table 1 on page 7 of the official paper.\n\n\nFor comparison, this model (ported to PyTorch) was fine-tuned and evaluated using the official Hugging Face GLUE evaluation scripts alongside bert-base-cased for comparison.\nThe training was done on a single 16GB NVIDIA Tesla V100 GPU. For MRPC/WNLI, the models were trained for 5 epochs, while for other tasks, the models were trained for 3 epochs. A sequence length of 512 was used with batch size 16 and learning rate 2e-5.\n\n\nThe following table summarizes the results for fnet-base (called *FNet (PyTorch) - Reproduced*) and bert-base-cased (called *Bert (PyTorch) - Reproduced*) in terms of fine-tuning speed. The format is *hour:min:seconds*. Note that the authors compared pre-traning speed in the official paper instead.\n\n\n\nOn average the PyTorch version of FNet-base requires *ca.* 32% less time for GLUE fine-tuning on GPU.\n\n\nThe following table summarizes the results for fnet-base (called *FNet (PyTorch) - Reproduced*) and bert-base-cased (called *Bert (PyTorch) - Reproduced*) in terms of performance and compares it to the reported performance of the official FNet-base model (called *FNet (Flax) - Official*). Note that the training hyperparameters of the reproduced models were not the same as the official model, so the performance may differ significantly for some tasks (for example: CoLA).\n\n\n\nWe can see that FNet-base achieves around 93% of BERT-base's performance on average.\n\n\nFor more details, please refer to the checkpoints linked with the scores. On overview of all fine-tuned checkpoints of the following table can be accessed here.", "### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nNote: The mask filling pipeline doesn't work exactly as the original model performs masking after converting to tokens. 
In masking pipeline an additional space is added after the [MASK].\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nNote: You must specify the maximum sequence length to be 512 and truncate/pad to the same length because the original model has no attention mask and considers all the hidden states during forward pass.", "### BibTeX entry and citation info\n\n\nContributions\n-------------\n\n\nThanks to @gchhablani for adding this model." ]
[ "TAGS\n#transformers #pytorch #rust #fnet #pretraining #en #dataset-c4 #arxiv-2105.03824 #license-apache-2.0 #endpoints_compatible #region-us \n", "### Preprocessing\n\n\nThe texts are lowercased and tokenized using SentencePiece and a vocabulary size of 32,000. The inputs of the model are\nthen of the form:\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.", "### Pretraining\n\n\nFNet-base was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size\nof 256. The sequence length was limited to 512 tokens. The optimizer\nused is Adam with a learning rate of 1e-4, \\(\\beta\\_{1} = 0.9\\) and \\(\\beta\\_{2} = 0.999\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.\n\n\nEvaluation results\n------------------\n\n\nFNet-base was fine-tuned and evaluated on the validation data of the GLUE benchamrk. The results of the official model (written in Flax) can be seen in Table 1 on page 7 of the official paper.\n\n\nFor comparison, this model (ported to PyTorch) was fine-tuned and evaluated using the official Hugging Face GLUE evaluation scripts alongside bert-base-cased for comparison.\nThe training was done on a single 16GB NVIDIA Tesla V100 GPU. For MRPC/WNLI, the models were trained for 5 epochs, while for other tasks, the models were trained for 3 epochs. A sequence length of 512 was used with batch size 16 and learning rate 2e-5.\n\n\nThe following table summarizes the results for fnet-base (called *FNet (PyTorch) - Reproduced*) and bert-base-cased (called *Bert (PyTorch) - Reproduced*) in terms of fine-tuning speed. The format is *hour:min:seconds*. Note that the authors compared pre-traning speed in the official paper instead.\n\n\n\nOn average the PyTorch version of FNet-base requires *ca.* 32% less time for GLUE fine-tuning on GPU.\n\n\nThe following table summarizes the results for fnet-base (called *FNet (PyTorch) - Reproduced*) and bert-base-cased (called *Bert (PyTorch) - Reproduced*) in terms of performance and compares it to the reported performance of the official FNet-base model (called *FNet (Flax) - Official*). Note that the training hyperparameters of the reproduced models were not the same as the official model, so the performance may differ significantly for some tasks (for example: CoLA).\n\n\n\nWe can see that FNet-base achieves around 93% of BERT-base's performance on average.\n\n\nFor more details, please refer to the checkpoints linked with the scores. 
On overview of all fine-tuned checkpoints of the following table can be accessed here.", "### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nNote: The mask filling pipeline doesn't work exactly as the original model performs masking after converting to tokens. In masking pipeline an additional space is added after the [MASK].\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nNote: You must specify the maximum sequence length to be 512 and truncate/pad to the same length because the original model has no attention mask and considers all the hidden states during forward pass.", "### BibTeX entry and citation info\n\n\nContributions\n-------------\n\n\nThanks to @gchhablani for adding this model." ]
null
transformers
# FNet large model Pretrained model on English language using a masked language modeling (MLM) and next sentence prediction (NSP) objective. It was introduced in [this paper](https://arxiv.org/abs/2105.03824) and first released in [this repository](https://github.com/google-research/google-research/tree/master/f_net). This model is cased: it makes a difference between english and English. The model achieves 0.58 accuracy on MLM objective and 0.80 on NSP objective. Disclaimer: This model card has been written by [gchhablani](https://huggingface.co/gchhablani). ## Model description FNet is a transformers model with attention replaced with fourier transforms. Hence, the inputs do not contain an `attention_mask`. It is pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the FNet model as inputs. This model has the following configuration: - 24-layer - 1024 hidden dimension ## Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=fnet) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. ### How to use You can use this model directly with a pipeline for masked language modeling: **Note: The mask filling pipeline doesn't work exactly as the original model performs masking after converting to tokens. In masking pipeline an additional space is added after the [MASK].** ```python >>> from transformers import FNetForMaskedLM, FNetTokenizer, pipeline >>> tokenizer = FNetTokenizer.from_pretrained("google/fnet-large") >>> model = FNetForMaskedLM.from_pretrained("google/fnet-large") >>> unmasker = pipeline('fill-mask', model=model, tokenizer=tokenizer) >>> unmasker("Hello I'm a [MASK] model.") [ {"sequence": "hello i'm a. 
model.", "score": 0.12840192019939423, "token": 16678, "token_str": "."}, {"sequence": "hello i'm a a model.", "score": 0.07460460811853409, "token": 8, "token_str": "a"}, {"sequence": "hello i'm a, model.", "score": 0.05011311173439026, "token": 16680, "token_str": ","}, {"sequence": "hello i'm a and model.", "score": 0.047409165650606155, "token": 36, "token_str": "and"}, {"sequence": "hello i'm a the model.", "score": 0.0269990973174572, "token": 13, "token_str": "the"}, ] ``` Here is how to use this model to get the features of a given text in PyTorch: **Note: You must specify the maximum sequence length to be 512 and truncate/pad to the same length because the original model has no attention mask and considers all the hidden states during forward pass.** ```python from transformers import FNetTokenizer, FNetModel tokenizer = FNetTokenizer.from_pretrained("google/fnet-large") model = FNetModel.from_pretrained("google/fnet-large") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt', padding='max_length', truncation=True, max_length=512) output = model(**encoded_input) ``` ### Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. However, the model's MLM accuracy may also affect answers. Given below are some example where gender-bias could be expected: ```python >>> from transformers import FNetForMaskedLM, FNetTokenizer, pipeline >>> tokenizer = FNetTokenizer.from_pretrained("google/fnet-large") >>> model = FNetForMaskedLM.from_pretrained("google/fnet-large") >>> unmasker = pipeline('fill-mask', model=model, tokenizer=tokenizer) >>> unmasker("The man worked as a [MASK].") [ {"sequence": "the man worked as a a.", "score": 0.39862048625946045, "token": 8, "token_str": "a"}, {"sequence": "the man worked as a the.", "score": 0.20786496996879578, "token": 13, "token_str": "the"}, {"sequence": "the man worked as a as.", "score": 0.012523212470114231, "token": 106, "token_str": "as"}, {"sequence": "the man worked as a an.", "score": 0.010838045738637447, "token": 102, "token_str": "an"}, {"sequence": "the man worked as a and.", "score": 0.006571347825229168, "token": 36, "token_str": "and"}, ] >>> unmasker("The woman worked as a [MASK].") [ {"sequence": "the woman worked as a the.", "score": 0.3320266902446747, "token": 13, "token_str": "the"}, {"sequence": "the woman worked as a a.", "score": 0.2591220438480377, "token": 8, "token_str": "a"}, {"sequence": "the woman worked as a as.", "score": 0.011250585317611694, "token": 106, "token_str": "as"}, {"sequence": "the woman worked as a an.", "score": 0.010153685696423054, "token": 102, "token_str": "an"}, {"sequence": "the woman worked as a and.", "score": 0.010126154869794846, "token": 36, "token_str": "and"}, ] ``` This bias will also affect all fine-tuned versions of this model. ## Training data The FNet model was pretrained on [C4](https://huggingface.co/datasets/c4), a cleaned version of the Common Crawl dataset. ## Training procedure ### Preprocessing The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 32,000. The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in the other cases, it's another random sentence in the corpus. 
Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two "sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. ### Pretraining The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size of 256. The sequence length was limited to 512 tokens. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. ## Evaluation results When fine-tuned on downstream tasks, this model achieves the following results: Glue test results: | Task | MNLI-(m/mm) | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | Average | |:----:|:-----------:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|:-------:| | | 78/76 | 85 | 85 | 94 | 78 | 84 | 88 | 69| 81.9 | ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2105-03824, author = {James Lee{-}Thorp and Joshua Ainslie and Ilya Eckstein and Santiago Onta{\~{n}}{\'{o}}n}, title = {FNet: Mixing Tokens with Fourier Transforms}, journal = {CoRR}, volume = {abs/2105.03824}, year = {2021}, url = {https://arxiv.org/abs/2105.03824}, archivePrefix = {arXiv}, eprint = {2105.03824}, timestamp = {Fri, 14 May 2021 12:13:30 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2105-03824.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` ## Contributions Thanks to [@gchhablani](https://huggingface.co/gchhablani) for adding this model.
{"language": "en", "license": "apache-2.0", "tags": ["fnet"], "datasets": ["c4"]}
google/fnet-large
null
[ "transformers", "pytorch", "fnet", "pretraining", "en", "dataset:c4", "arxiv:2105.03824", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2105.03824" ]
[ "en" ]
TAGS #transformers #pytorch #fnet #pretraining #en #dataset-c4 #arxiv-2105.03824 #license-apache-2.0 #endpoints_compatible #region-us
FNet large model ================ Pretrained model on English language using a masked language modeling (MLM) and next sentence prediction (NSP) objective. It was introduced in this paper and first released in this repository. This model is cased: it makes a difference between english and English. The model achieves 0.58 accuracy on MLM objective and 0.80 on NSP objective. Disclaimer: This model card has been written by gchhablani. Model description ----------------- FNet is a transformers model with attention replaced with fourier transforms. Hence, the inputs do not contain an 'attention\_mask'. It is pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: * Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. * Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the FNet model as inputs. This model has the following configuration: * 24-layer * 1024 hidden dimension Intended uses & limitations --------------------------- You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. ### How to use You can use this model directly with a pipeline for masked language modeling: Note: The mask filling pipeline doesn't work exactly as the original model performs masking after converting to tokens. In masking pipeline an additional space is added after the [MASK]. Here is how to use this model to get the features of a given text in PyTorch: Note: You must specify the maximum sequence length to be 512 and truncate/pad to the same length because the original model has no attention mask and considers all the hidden states during forward pass. ### Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. However, the model's MLM accuracy may also affect answers. 
Given below are some example where gender-bias could be expected: This bias will also affect all fine-tuned versions of this model. Training data ------------- The FNet model was pretrained on C4, a cleaned version of the Common Crawl dataset. Training procedure ------------------ ### Preprocessing The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 32,000. The inputs of the model are then of the form: With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two "sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: * 15% of the tokens are masked. * In 80% of the cases, the masked tokens are replaced by '[MASK]'. * In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. * In the 10% remaining cases, the masked tokens are left as is. ### Pretraining The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size of 256. The sequence length was limited to 512 tokens. The optimizer used is Adam with a learning rate of 1e-4, \(\beta\_{1} = 0.9\) and \(\beta\_{2} = 0.999\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. Evaluation results ------------------ When fine-tuned on downstream tasks, this model achieves the following results: Glue test results: ### BibTeX entry and citation info Contributions ------------- Thanks to @gchhablani for adding this model.
[ "### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nNote: The mask filling pipeline doesn't work exactly as the original model performs masking after converting to tokens. In masking pipeline an additional space is added after the [MASK].\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nNote: You must specify the maximum sequence length to be 512 and truncate/pad to the same length because the original model has no attention mask and considers all the hidden states during forward pass.", "### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. However, the model's MLM accuracy may also affect answers. Given below are some example where gender-bias could be expected:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe FNet model was pretrained on C4, a cleaned version of the Common Crawl dataset.\n\n\nTraining procedure\n------------------", "### Preprocessing\n\n\nThe texts are lowercased and tokenized using SentencePiece and a vocabulary size of 32,000. The inputs of the model are\nthen of the form:\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.", "### Pretraining\n\n\nThe model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size\nof 256. The sequence length was limited to 512 tokens. The optimizer\nused is Adam with a learning rate of 1e-4, \\(\\beta\\_{1} = 0.9\\) and \\(\\beta\\_{2} = 0.999\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.\n\n\nEvaluation results\n------------------\n\n\nWhen fine-tuned on downstream tasks, this model achieves the following results:\n\n\nGlue test results:", "### BibTeX entry and citation info\n\n\nContributions\n-------------\n\n\nThanks to @gchhablani for adding this model." ]
[ "TAGS\n#transformers #pytorch #fnet #pretraining #en #dataset-c4 #arxiv-2105.03824 #license-apache-2.0 #endpoints_compatible #region-us \n", "### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nNote: The mask filling pipeline doesn't work exactly as the original model performs masking after converting to tokens. In masking pipeline an additional space is added after the [MASK].\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nNote: You must specify the maximum sequence length to be 512 and truncate/pad to the same length because the original model has no attention mask and considers all the hidden states during forward pass.", "### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. However, the model's MLM accuracy may also affect answers. Given below are some example where gender-bias could be expected:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe FNet model was pretrained on C4, a cleaned version of the Common Crawl dataset.\n\n\nTraining procedure\n------------------", "### Preprocessing\n\n\nThe texts are lowercased and tokenized using SentencePiece and a vocabulary size of 32,000. The inputs of the model are\nthen of the form:\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.", "### Pretraining\n\n\nThe model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size\nof 256. The sequence length was limited to 512 tokens. The optimizer\nused is Adam with a learning rate of 1e-4, \\(\\beta\\_{1} = 0.9\\) and \\(\\beta\\_{2} = 0.999\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.\n\n\nEvaluation results\n------------------\n\n\nWhen fine-tuned on downstream tasks, this model achieves the following results:\n\n\nGlue test results:", "### BibTeX entry and citation info\n\n\nContributions\n-------------\n\n\nThanks to @gchhablani for adding this model." ]
null
transformers
## MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices MobileBERT is a thin version of BERT_LARGE, while equipped with bottleneck structures and a carefully designed balance between self-attentions and feed-forward networks. This checkpoint is the original MobileBert Optimized Uncased English: [uncased_L-24_H-128_B-512_A-4_F-4_OPT](https://storage.googleapis.com/cloud-tpu-checkpoints/mobilebert/uncased_L-24_H-128_B-512_A-4_F-4_OPT.tar.gz) checkpoint. ## How to use MobileBERT in `transformers` ```python from transformers import pipeline fill_mask = pipeline( "fill-mask", model="google/mobilebert-uncased", tokenizer="google/mobilebert-uncased" ) print( fill_mask(f"HuggingFace is creating a {fill_mask.tokenizer.mask_token} that the community uses to solve NLP tasks.") ) ```
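Beyond the fill-mask pipeline shown above, the same checkpoint can be loaded directly to extract token-level features. This is a minimal PyTorch sketch, not part of the original card; it only relies on the standard `MobileBertTokenizer`/`MobileBertModel` classes in `transformers`.

```python
from transformers import MobileBertTokenizer, MobileBertModel

tokenizer = MobileBertTokenizer.from_pretrained("google/mobilebert-uncased")
model = MobileBertModel.from_pretrained("google/mobilebert-uncased")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors="pt")
output = model(**encoded_input)

hidden_states = output.last_hidden_state  # token-level features for downstream use
```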
{"language": "en", "license": "apache-2.0", "thumbnail": "https://huggingface.co/front/thumbnails/google.png"}
google/mobilebert-uncased
null
[ "transformers", "pytorch", "tf", "rust", "mobilebert", "pretraining", "en", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #tf #rust #mobilebert #pretraining #en #license-apache-2.0 #endpoints_compatible #has_space #region-us
## MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices MobileBERT is a thin version of BERT_LARGE, while equipped with bottleneck structures and a carefully designed balance between self-attentions and feed-forward networks. This checkpoint is the original MobileBert Optimized Uncased English: uncased_L-24_H-128_B-512_A-4_F-4_OPT checkpoint. ## How to use MobileBERT in 'transformers'
[ "## MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices\n\nMobileBERT is a thin version of BERT_LARGE, while equipped with bottleneck structures and a carefully designed balance\nbetween self-attentions and feed-forward networks.\n\nThis checkpoint is the original MobileBert Optimized Uncased English: \nuncased_L-24_H-128_B-512_A-4_F-4_OPT \ncheckpoint.", "## How to use MobileBERT in 'transformers'" ]
[ "TAGS\n#transformers #pytorch #tf #rust #mobilebert #pretraining #en #license-apache-2.0 #endpoints_compatible #has_space #region-us \n", "## MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices\n\nMobileBERT is a thin version of BERT_LARGE, while equipped with bottleneck structures and a carefully designed balance\nbetween self-attentions and feed-forward networks.\n\nThis checkpoint is the original MobileBert Optimized Uncased English: \nuncased_L-24_H-128_B-512_A-4_F-4_OPT \ncheckpoint.", "## How to use MobileBERT in 'transformers'" ]
text2text-generation
transformers
[Google's mT5](https://github.com/google-research/multilingual-t5) mT5 is pretrained on the [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) corpus, covering 101 languages: Afrikaans, Albanian, Amharic, Arabic, Armenian, Azerbaijani, Basque, Belarusian, Bengali, Bulgarian, Burmese, Catalan, Cebuano, Chichewa, Chinese, Corsican, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Haitian Creole, Hausa, Hawaiian, Hebrew, Hindi, Hmong, Hungarian, Icelandic, Igbo, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish, Kyrgyz, Lao, Latin, Latvian, Lithuanian, Luxembourgish, Macedonian, Malagasy, Malay, Malayalam, Maltese, Maori, Marathi, Mongolian, Nepali, Norwegian, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Samoan, Scottish Gaelic, Serbian, Shona, Sindhi, Sinhala, Slovak, Slovenian, Somali, Sotho, Spanish, Sundanese, Swahili, Swedish, Tajik, Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Uzbek, Vietnamese, Welsh, West Frisian, Xhosa, Yiddish, Yoruba, Zulu. **Note**: mT5 was only pre-trained on mC4 excluding any supervised training. Therefore, this model has to be fine-tuned before it is useable on a downstream task. Pretraining Dataset: [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) Other Community Checkpoints: [here](https://huggingface.co/models?search=mt5) Paper: [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) Authors: *Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel* ## Abstract The recent "Text-to-Text Transfer Transformer" (T5) leveraged a unified text-to-text format and scale to attain state-of-the-art results on a wide variety of English-language NLP tasks. In this paper, we introduce mT5, a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages. We describe the design and modified training of mT5 and demonstrate its state-of-the-art performance on many multilingual benchmarks. All of the code and model checkpoints used in this work are publicly available.
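Since the card stresses that mT5 received no supervised pre-training and must be fine-tuned before use, the sketch below shows one way to load the checkpoint as a sequence-to-sequence model and compute a training loss on a toy labelled pair. The class names are the standard `transformers` ones rather than anything stated in the card, and the input/target pair is purely illustrative; the same pattern applies to the other mT5 sizes listed in this section.

```python
from transformers import MT5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("google/mt5-base")
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-base")

# Toy supervised example -- raw mT5 is only LM-pretrained, so real use requires
# fine-tuning on labelled pairs like this inside a normal training loop.
inputs = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt")
labels = tokenizer("Das Haus ist wunderbar.", return_tensors="pt").input_ids

loss = model(**inputs, labels=labels).loss
loss.backward()  # then step an optimizer of your choice
```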
{"language": ["multilingual", "af", "am", "ar", "az", "be", "bg", "bn", "ca", "ceb", "co", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fil", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "haw", "hi", "hmn", "ht", "hu", "hy", "ig", "is", "it", "iw", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lb", "lo", "lt", "lv", "mg", "mi", "mk", "ml", "mn", "mr", "ms", "mt", "my", "ne", "nl", false, "ny", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "sm", "sn", "so", "sq", "sr", "st", "su", "sv", "sw", "ta", "te", "tg", "th", "tr", "uk", "und", "ur", "uz", "vi", "xh", "yi", "yo", "zh", "zu"], "license": "apache-2.0", "datasets": ["mc4"]}
google/mt5-base
null
[ "transformers", "pytorch", "tf", "jax", "mt5", "text2text-generation", "multilingual", "af", "am", "ar", "az", "be", "bg", "bn", "ca", "ceb", "co", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fil", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "haw", "hi", "hmn", "ht", "hu", "hy", "ig", "is", "it", "iw", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lb", "lo", "lt", "lv", "mg", "mi", "mk", "ml", "mn", "mr", "ms", "mt", "my", "ne", "nl", "no", "ny", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "sm", "sn", "so", "sq", "sr", "st", "su", "sv", "sw", "ta", "te", "tg", "th", "tr", "uk", "und", "ur", "uz", "vi", "xh", "yi", "yo", "zh", "zu", "dataset:mc4", "arxiv:2010.11934", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2010.11934" ]
[ "multilingual", "af", "am", "ar", "az", "be", "bg", "bn", "ca", "ceb", "co", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fil", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "haw", "hi", "hmn", "ht", "hu", "hy", "ig", "is", "it", "iw", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lb", "lo", "lt", "lv", "mg", "mi", "mk", "ml", "mn", "mr", "ms", "mt", "my", "ne", "nl", "no", "ny", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "sm", "sn", "so", "sq", "sr", "st", "su", "sv", "sw", "ta", "te", "tg", "th", "tr", "uk", "und", "ur", "uz", "vi", "xh", "yi", "yo", "zh", "zu" ]
TAGS #transformers #pytorch #tf #jax #mt5 #text2text-generation #multilingual #af #am #ar #az #be #bg #bn #ca #ceb #co #cs #cy #da #de #el #en #eo #es #et #eu #fa #fi #fil #fr #fy #ga #gd #gl #gu #ha #haw #hi #hmn #ht #hu #hy #ig #is #it #iw #ja #jv #ka #kk #km #kn #ko #ku #ky #la #lb #lo #lt #lv #mg #mi #mk #ml #mn #mr #ms #mt #my #ne #nl #no #ny #pa #pl #ps #pt #ro #ru #sd #si #sk #sl #sm #sn #so #sq #sr #st #su #sv #sw #ta #te #tg #th #tr #uk #und #ur #uz #vi #xh #yi #yo #zh #zu #dataset-mc4 #arxiv-2010.11934 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
Google's mT5 mT5 is pretrained on the mC4 corpus, covering 101 languages: Afrikaans, Albanian, Amharic, Arabic, Armenian, Azerbaijani, Basque, Belarusian, Bengali, Bulgarian, Burmese, Catalan, Cebuano, Chichewa, Chinese, Corsican, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Haitian Creole, Hausa, Hawaiian, Hebrew, Hindi, Hmong, Hungarian, Icelandic, Igbo, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish, Kyrgyz, Lao, Latin, Latvian, Lithuanian, Luxembourgish, Macedonian, Malagasy, Malay, Malayalam, Maltese, Maori, Marathi, Mongolian, Nepali, Norwegian, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Samoan, Scottish Gaelic, Serbian, Shona, Sindhi, Sinhala, Slovak, Slovenian, Somali, Sotho, Spanish, Sundanese, Swahili, Swedish, Tajik, Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Uzbek, Vietnamese, Welsh, West Frisian, Xhosa, Yiddish, Yoruba, Zulu. Note: mT5 was only pre-trained on mC4 excluding any supervised training. Therefore, this model has to be fine-tuned before it is useable on a downstream task. Pretraining Dataset: mC4 Other Community Checkpoints: here Paper: mT5: A massively multilingual pre-trained text-to-text transformer Authors: *Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel* ## Abstract The recent "Text-to-Text Transfer Transformer" (T5) leveraged a unified text-to-text format and scale to attain state-of-the-art results on a wide variety of English-language NLP tasks. In this paper, we introduce mT5, a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages. We describe the design and modified training of mT5 and demonstrate its state-of-the-art performance on many multilingual benchmarks. All of the code and model checkpoints used in this work are publicly available.
[ "## Abstract\n\nThe recent \"Text-to-Text Transfer Transformer\" (T5) leveraged a unified text-to-text format and scale to attain state-of-the-art results on a wide variety of English-language NLP tasks. In this paper, we introduce mT5, a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages. We describe the design and modified training of mT5 and demonstrate its state-of-the-art performance on many multilingual benchmarks. All of the code and model checkpoints used in this work are publicly available." ]
[ "TAGS\n#transformers #pytorch #tf #jax #mt5 #text2text-generation #multilingual #af #am #ar #az #be #bg #bn #ca #ceb #co #cs #cy #da #de #el #en #eo #es #et #eu #fa #fi #fil #fr #fy #ga #gd #gl #gu #ha #haw #hi #hmn #ht #hu #hy #ig #is #it #iw #ja #jv #ka #kk #km #kn #ko #ku #ky #la #lb #lo #lt #lv #mg #mi #mk #ml #mn #mr #ms #mt #my #ne #nl #no #ny #pa #pl #ps #pt #ro #ru #sd #si #sk #sl #sm #sn #so #sq #sr #st #su #sv #sw #ta #te #tg #th #tr #uk #und #ur #uz #vi #xh #yi #yo #zh #zu #dataset-mc4 #arxiv-2010.11934 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n", "## Abstract\n\nThe recent \"Text-to-Text Transfer Transformer\" (T5) leveraged a unified text-to-text format and scale to attain state-of-the-art results on a wide variety of English-language NLP tasks. In this paper, we introduce mT5, a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages. We describe the design and modified training of mT5 and demonstrate its state-of-the-art performance on many multilingual benchmarks. All of the code and model checkpoints used in this work are publicly available." ]
text2text-generation
transformers
[Google's mT5](https://github.com/google-research/multilingual-t5) mT5 is pretrained on the [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) corpus, covering 101 languages: Afrikaans, Albanian, Amharic, Arabic, Armenian, Azerbaijani, Basque, Belarusian, Bengali, Bulgarian, Burmese, Catalan, Cebuano, Chichewa, Chinese, Corsican, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Haitian Creole, Hausa, Hawaiian, Hebrew, Hindi, Hmong, Hungarian, Icelandic, Igbo, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish, Kyrgyz, Lao, Latin, Latvian, Lithuanian, Luxembourgish, Macedonian, Malagasy, Malay, Malayalam, Maltese, Maori, Marathi, Mongolian, Nepali, Norwegian, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Samoan, Scottish Gaelic, Serbian, Shona, Sindhi, Sinhala, Slovak, Slovenian, Somali, Sotho, Spanish, Sundanese, Swahili, Swedish, Tajik, Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Uzbek, Vietnamese, Welsh, West Frisian, Xhosa, Yiddish, Yoruba, Zulu. **Note**: mT5 was only pre-trained on mC4 excluding any supervised training. Therefore, this model has to be fine-tuned before it is useable on a downstream task. Pretraining Dataset: [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) Other Community Checkpoints: [here](https://huggingface.co/models?search=mt5) Paper: [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) Authors: *Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel* ## Abstract The recent "Text-to-Text Transfer Transformer" (T5) leveraged a unified text-to-text format and scale to attain state-of-the-art results on a wide variety of English-language NLP tasks. In this paper, we introduce mT5, a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages. We describe the design and modified training of mT5 and demonstrate its state-of-the-art performance on many multilingual benchmarks. All of the code and model checkpoints used in this work are publicly available.
{"language": ["multilingual", "af", "am", "ar", "az", "be", "bg", "bn", "ca", "ceb", "co", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fil", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "haw", "hi", "hmn", "ht", "hu", "hy", "ig", "is", "it", "iw", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lb", "lo", "lt", "lv", "mg", "mi", "mk", "ml", "mn", "mr", "ms", "mt", "my", "ne", "nl", false, "ny", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "sm", "sn", "so", "sq", "sr", "st", "su", "sv", "sw", "ta", "te", "tg", "th", "tr", "uk", "und", "ur", "uz", "vi", "xh", "yi", "yo", "zh", "zu"], "license": "apache-2.0", "datasets": ["mc4"]}
google/mt5-large
null
[ "transformers", "pytorch", "tf", "jax", "mt5", "text2text-generation", "multilingual", "af", "am", "ar", "az", "be", "bg", "bn", "ca", "ceb", "co", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fil", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "haw", "hi", "hmn", "ht", "hu", "hy", "ig", "is", "it", "iw", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lb", "lo", "lt", "lv", "mg", "mi", "mk", "ml", "mn", "mr", "ms", "mt", "my", "ne", "nl", "no", "ny", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "sm", "sn", "so", "sq", "sr", "st", "su", "sv", "sw", "ta", "te", "tg", "th", "tr", "uk", "und", "ur", "uz", "vi", "xh", "yi", "yo", "zh", "zu", "dataset:mc4", "arxiv:2010.11934", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2010.11934" ]
[ "multilingual", "af", "am", "ar", "az", "be", "bg", "bn", "ca", "ceb", "co", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fil", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "haw", "hi", "hmn", "ht", "hu", "hy", "ig", "is", "it", "iw", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lb", "lo", "lt", "lv", "mg", "mi", "mk", "ml", "mn", "mr", "ms", "mt", "my", "ne", "nl", "no", "ny", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "sm", "sn", "so", "sq", "sr", "st", "su", "sv", "sw", "ta", "te", "tg", "th", "tr", "uk", "und", "ur", "uz", "vi", "xh", "yi", "yo", "zh", "zu" ]
TAGS #transformers #pytorch #tf #jax #mt5 #text2text-generation #multilingual #af #am #ar #az #be #bg #bn #ca #ceb #co #cs #cy #da #de #el #en #eo #es #et #eu #fa #fi #fil #fr #fy #ga #gd #gl #gu #ha #haw #hi #hmn #ht #hu #hy #ig #is #it #iw #ja #jv #ka #kk #km #kn #ko #ku #ky #la #lb #lo #lt #lv #mg #mi #mk #ml #mn #mr #ms #mt #my #ne #nl #no #ny #pa #pl #ps #pt #ro #ru #sd #si #sk #sl #sm #sn #so #sq #sr #st #su #sv #sw #ta #te #tg #th #tr #uk #und #ur #uz #vi #xh #yi #yo #zh #zu #dataset-mc4 #arxiv-2010.11934 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
Google's mT5 mT5 is pretrained on the mC4 corpus, covering 101 languages: Afrikaans, Albanian, Amharic, Arabic, Armenian, Azerbaijani, Basque, Belarusian, Bengali, Bulgarian, Burmese, Catalan, Cebuano, Chichewa, Chinese, Corsican, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Haitian Creole, Hausa, Hawaiian, Hebrew, Hindi, Hmong, Hungarian, Icelandic, Igbo, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish, Kyrgyz, Lao, Latin, Latvian, Lithuanian, Luxembourgish, Macedonian, Malagasy, Malay, Malayalam, Maltese, Maori, Marathi, Mongolian, Nepali, Norwegian, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Samoan, Scottish Gaelic, Serbian, Shona, Sindhi, Sinhala, Slovak, Slovenian, Somali, Sotho, Spanish, Sundanese, Swahili, Swedish, Tajik, Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Uzbek, Vietnamese, Welsh, West Frisian, Xhosa, Yiddish, Yoruba, Zulu. Note: mT5 was only pre-trained on mC4 excluding any supervised training. Therefore, this model has to be fine-tuned before it is useable on a downstream task. Pretraining Dataset: mC4 Other Community Checkpoints: here Paper: mT5: A massively multilingual pre-trained text-to-text transformer Authors: *Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel* ## Abstract The recent "Text-to-Text Transfer Transformer" (T5) leveraged a unified text-to-text format and scale to attain state-of-the-art results on a wide variety of English-language NLP tasks. In this paper, we introduce mT5, a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages. We describe the design and modified training of mT5 and demonstrate its state-of-the-art performance on many multilingual benchmarks. All of the code and model checkpoints used in this work are publicly available.
[ "## Abstract\n\nThe recent \"Text-to-Text Transfer Transformer\" (T5) leveraged a unified text-to-text format and scale to attain state-of-the-art results on a wide variety of English-language NLP tasks. In this paper, we introduce mT5, a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages. We describe the design and modified training of mT5 and demonstrate its state-of-the-art performance on many multilingual benchmarks. All of the code and model checkpoints used in this work are publicly available." ]
[ "TAGS\n#transformers #pytorch #tf #jax #mt5 #text2text-generation #multilingual #af #am #ar #az #be #bg #bn #ca #ceb #co #cs #cy #da #de #el #en #eo #es #et #eu #fa #fi #fil #fr #fy #ga #gd #gl #gu #ha #haw #hi #hmn #ht #hu #hy #ig #is #it #iw #ja #jv #ka #kk #km #kn #ko #ku #ky #la #lb #lo #lt #lv #mg #mi #mk #ml #mn #mr #ms #mt #my #ne #nl #no #ny #pa #pl #ps #pt #ro #ru #sd #si #sk #sl #sm #sn #so #sq #sr #st #su #sv #sw #ta #te #tg #th #tr #uk #und #ur #uz #vi #xh #yi #yo #zh #zu #dataset-mc4 #arxiv-2010.11934 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n", "## Abstract\n\nThe recent \"Text-to-Text Transfer Transformer\" (T5) leveraged a unified text-to-text format and scale to attain state-of-the-art results on a wide variety of English-language NLP tasks. In this paper, we introduce mT5, a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages. We describe the design and modified training of mT5 and demonstrate its state-of-the-art performance on many multilingual benchmarks. All of the code and model checkpoints used in this work are publicly available." ]
text2text-generation
transformers
[Google's mT5](https://github.com/google-research/multilingual-t5) mT5 is pretrained on the [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) corpus, covering 101 languages: Afrikaans, Albanian, Amharic, Arabic, Armenian, Azerbaijani, Basque, Belarusian, Bengali, Bulgarian, Burmese, Catalan, Cebuano, Chichewa, Chinese, Corsican, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Haitian Creole, Hausa, Hawaiian, Hebrew, Hindi, Hmong, Hungarian, Icelandic, Igbo, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish, Kyrgyz, Lao, Latin, Latvian, Lithuanian, Luxembourgish, Macedonian, Malagasy, Malay, Malayalam, Maltese, Maori, Marathi, Mongolian, Nepali, Norwegian, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Samoan, Scottish Gaelic, Serbian, Shona, Sindhi, Sinhala, Slovak, Slovenian, Somali, Sotho, Spanish, Sundanese, Swahili, Swedish, Tajik, Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Uzbek, Vietnamese, Welsh, West Frisian, Xhosa, Yiddish, Yoruba, Zulu. **Note**: mT5 was only pre-trained on mC4 excluding any supervised training. Therefore, this model has to be fine-tuned before it is useable on a downstream task. Pretraining Dataset: [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) Other Community Checkpoints: [here](https://huggingface.co/models?search=mt5) Paper: [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) Authors: *Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel* ## Abstract The recent "Text-to-Text Transfer Transformer" (T5) leveraged a unified text-to-text format and scale to attain state-of-the-art results on a wide variety of English-language NLP tasks. In this paper, we introduce mT5, a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages. We describe the design and modified training of mT5 and demonstrate its state-of-the-art performance on many multilingual benchmarks. All of the code and model checkpoints used in this work are publicly available.
{"language": ["multilingual", "af", "am", "ar", "az", "be", "bg", "bn", "ca", "ceb", "co", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fil", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "haw", "hi", "hmn", "ht", "hu", "hy", "ig", "is", "it", "iw", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lb", "lo", "lt", "lv", "mg", "mi", "mk", "ml", "mn", "mr", "ms", "mt", "my", "ne", "nl", false, "ny", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "sm", "sn", "so", "sq", "sr", "st", "su", "sv", "sw", "ta", "te", "tg", "th", "tr", "uk", "und", "ur", "uz", "vi", "xh", "yi", "yo", "zh", "zu"], "license": "apache-2.0", "datasets": ["mc4"]}
google/mt5-small
null
[ "transformers", "pytorch", "tf", "jax", "onnx", "mt5", "text2text-generation", "multilingual", "af", "am", "ar", "az", "be", "bg", "bn", "ca", "ceb", "co", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fil", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "haw", "hi", "hmn", "ht", "hu", "hy", "ig", "is", "it", "iw", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lb", "lo", "lt", "lv", "mg", "mi", "mk", "ml", "mn", "mr", "ms", "mt", "my", "ne", "nl", "no", "ny", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "sm", "sn", "so", "sq", "sr", "st", "su", "sv", "sw", "ta", "te", "tg", "th", "tr", "uk", "und", "ur", "uz", "vi", "xh", "yi", "yo", "zh", "zu", "dataset:mc4", "arxiv:2010.11934", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2010.11934" ]
[ "multilingual", "af", "am", "ar", "az", "be", "bg", "bn", "ca", "ceb", "co", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fil", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "haw", "hi", "hmn", "ht", "hu", "hy", "ig", "is", "it", "iw", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lb", "lo", "lt", "lv", "mg", "mi", "mk", "ml", "mn", "mr", "ms", "mt", "my", "ne", "nl", "no", "ny", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "sm", "sn", "so", "sq", "sr", "st", "su", "sv", "sw", "ta", "te", "tg", "th", "tr", "uk", "und", "ur", "uz", "vi", "xh", "yi", "yo", "zh", "zu" ]
TAGS #transformers #pytorch #tf #jax #onnx #mt5 #text2text-generation #multilingual #af #am #ar #az #be #bg #bn #ca #ceb #co #cs #cy #da #de #el #en #eo #es #et #eu #fa #fi #fil #fr #fy #ga #gd #gl #gu #ha #haw #hi #hmn #ht #hu #hy #ig #is #it #iw #ja #jv #ka #kk #km #kn #ko #ku #ky #la #lb #lo #lt #lv #mg #mi #mk #ml #mn #mr #ms #mt #my #ne #nl #no #ny #pa #pl #ps #pt #ro #ru #sd #si #sk #sl #sm #sn #so #sq #sr #st #su #sv #sw #ta #te #tg #th #tr #uk #und #ur #uz #vi #xh #yi #yo #zh #zu #dataset-mc4 #arxiv-2010.11934 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
Google's mT5 mT5 is pretrained on the mC4 corpus, covering 101 languages: Afrikaans, Albanian, Amharic, Arabic, Armenian, Azerbaijani, Basque, Belarusian, Bengali, Bulgarian, Burmese, Catalan, Cebuano, Chichewa, Chinese, Corsican, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Haitian Creole, Hausa, Hawaiian, Hebrew, Hindi, Hmong, Hungarian, Icelandic, Igbo, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish, Kyrgyz, Lao, Latin, Latvian, Lithuanian, Luxembourgish, Macedonian, Malagasy, Malay, Malayalam, Maltese, Maori, Marathi, Mongolian, Nepali, Norwegian, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Samoan, Scottish Gaelic, Serbian, Shona, Sindhi, Sinhala, Slovak, Slovenian, Somali, Sotho, Spanish, Sundanese, Swahili, Swedish, Tajik, Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Uzbek, Vietnamese, Welsh, West Frisian, Xhosa, Yiddish, Yoruba, Zulu. Note: mT5 was only pre-trained on mC4 excluding any supervised training. Therefore, this model has to be fine-tuned before it is useable on a downstream task. Pretraining Dataset: mC4 Other Community Checkpoints: here Paper: mT5: A massively multilingual pre-trained text-to-text transformer Authors: *Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel* ## Abstract The recent "Text-to-Text Transfer Transformer" (T5) leveraged a unified text-to-text format and scale to attain state-of-the-art results on a wide variety of English-language NLP tasks. In this paper, we introduce mT5, a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages. We describe the design and modified training of mT5 and demonstrate its state-of-the-art performance on many multilingual benchmarks. All of the code and model checkpoints used in this work are publicly available.
[ "## Abstract\n\nThe recent \"Text-to-Text Transfer Transformer\" (T5) leveraged a unified text-to-text format and scale to attain state-of-the-art results on a wide variety of English-language NLP tasks. In this paper, we introduce mT5, a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages. We describe the design and modified training of mT5 and demonstrate its state-of-the-art performance on many multilingual benchmarks. All of the code and model checkpoints used in this work are publicly available." ]
[ "TAGS\n#transformers #pytorch #tf #jax #onnx #mt5 #text2text-generation #multilingual #af #am #ar #az #be #bg #bn #ca #ceb #co #cs #cy #da #de #el #en #eo #es #et #eu #fa #fi #fil #fr #fy #ga #gd #gl #gu #ha #haw #hi #hmn #ht #hu #hy #ig #is #it #iw #ja #jv #ka #kk #km #kn #ko #ku #ky #la #lb #lo #lt #lv #mg #mi #mk #ml #mn #mr #ms #mt #my #ne #nl #no #ny #pa #pl #ps #pt #ro #ru #sd #si #sk #sl #sm #sn #so #sq #sr #st #su #sv #sw #ta #te #tg #th #tr #uk #und #ur #uz #vi #xh #yi #yo #zh #zu #dataset-mc4 #arxiv-2010.11934 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n", "## Abstract\n\nThe recent \"Text-to-Text Transfer Transformer\" (T5) leveraged a unified text-to-text format and scale to attain state-of-the-art results on a wide variety of English-language NLP tasks. In this paper, we introduce mT5, a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages. We describe the design and modified training of mT5 and demonstrate its state-of-the-art performance on many multilingual benchmarks. All of the code and model checkpoints used in this work are publicly available." ]
text2text-generation
transformers
[Google's mT5](https://github.com/google-research/multilingual-t5) mT5 is pretrained on the [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) corpus, covering 101 languages: Afrikaans, Albanian, Amharic, Arabic, Armenian, Azerbaijani, Basque, Belarusian, Bengali, Bulgarian, Burmese, Catalan, Cebuano, Chichewa, Chinese, Corsican, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Haitian Creole, Hausa, Hawaiian, Hebrew, Hindi, Hmong, Hungarian, Icelandic, Igbo, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish, Kyrgyz, Lao, Latin, Latvian, Lithuanian, Luxembourgish, Macedonian, Malagasy, Malay, Malayalam, Maltese, Maori, Marathi, Mongolian, Nepali, Norwegian, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Samoan, Scottish Gaelic, Serbian, Shona, Sindhi, Sinhala, Slovak, Slovenian, Somali, Sotho, Spanish, Sundanese, Swahili, Swedish, Tajik, Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Uzbek, Vietnamese, Welsh, West Frisian, Xhosa, Yiddish, Yoruba, Zulu. **Note**: mT5 was only pre-trained on mC4 excluding any supervised training. Therefore, this model has to be fine-tuned before it is useable on a downstream task. Pretraining Dataset: [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) Other Community Checkpoints: [here](https://huggingface.co/models?search=mt5) Paper: [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) Authors: *Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel* ## Abstract The recent "Text-to-Text Transfer Transformer" (T5) leveraged a unified text-to-text format and scale to attain state-of-the-art results on a wide variety of English-language NLP tasks. In this paper, we introduce mT5, a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages. We describe the design and modified training of mT5 and demonstrate its state-of-the-art performance on many multilingual benchmarks. All of the code and model checkpoints used in this work are publicly available.
{"language": ["multilingual", "af", "am", "ar", "az", "be", "bg", "bn", "ca", "ceb", "co", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fil", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "haw", "hi", "hmn", "ht", "hu", "hy", "ig", "is", "it", "iw", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lb", "lo", "lt", "lv", "mg", "mi", "mk", "ml", "mn", "mr", "ms", "mt", "my", "ne", "nl", false, "ny", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "sm", "sn", "so", "sq", "sr", "st", "su", "sv", "sw", "ta", "te", "tg", "th", "tr", "uk", "und", "ur", "uz", "vi", "xh", "yi", "yo", "zh", "zu"], "license": "apache-2.0", "datasets": ["mc4"]}
google/mt5-xl
null
[ "transformers", "pytorch", "tf", "jax", "mt5", "text2text-generation", "multilingual", "af", "am", "ar", "az", "be", "bg", "bn", "ca", "ceb", "co", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fil", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "haw", "hi", "hmn", "ht", "hu", "hy", "ig", "is", "it", "iw", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lb", "lo", "lt", "lv", "mg", "mi", "mk", "ml", "mn", "mr", "ms", "mt", "my", "ne", "nl", "no", "ny", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "sm", "sn", "so", "sq", "sr", "st", "su", "sv", "sw", "ta", "te", "tg", "th", "tr", "uk", "und", "ur", "uz", "vi", "xh", "yi", "yo", "zh", "zu", "dataset:mc4", "arxiv:2010.11934", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2010.11934" ]
[ "multilingual", "af", "am", "ar", "az", "be", "bg", "bn", "ca", "ceb", "co", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fil", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "haw", "hi", "hmn", "ht", "hu", "hy", "ig", "is", "it", "iw", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lb", "lo", "lt", "lv", "mg", "mi", "mk", "ml", "mn", "mr", "ms", "mt", "my", "ne", "nl", "no", "ny", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "sm", "sn", "so", "sq", "sr", "st", "su", "sv", "sw", "ta", "te", "tg", "th", "tr", "uk", "und", "ur", "uz", "vi", "xh", "yi", "yo", "zh", "zu" ]
TAGS #transformers #pytorch #tf #jax #mt5 #text2text-generation #multilingual #af #am #ar #az #be #bg #bn #ca #ceb #co #cs #cy #da #de #el #en #eo #es #et #eu #fa #fi #fil #fr #fy #ga #gd #gl #gu #ha #haw #hi #hmn #ht #hu #hy #ig #is #it #iw #ja #jv #ka #kk #km #kn #ko #ku #ky #la #lb #lo #lt #lv #mg #mi #mk #ml #mn #mr #ms #mt #my #ne #nl #no #ny #pa #pl #ps #pt #ro #ru #sd #si #sk #sl #sm #sn #so #sq #sr #st #su #sv #sw #ta #te #tg #th #tr #uk #und #ur #uz #vi #xh #yi #yo #zh #zu #dataset-mc4 #arxiv-2010.11934 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
Google's mT5 mT5 is pretrained on the mC4 corpus, covering 101 languages: Afrikaans, Albanian, Amharic, Arabic, Armenian, Azerbaijani, Basque, Belarusian, Bengali, Bulgarian, Burmese, Catalan, Cebuano, Chichewa, Chinese, Corsican, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Haitian Creole, Hausa, Hawaiian, Hebrew, Hindi, Hmong, Hungarian, Icelandic, Igbo, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish, Kyrgyz, Lao, Latin, Latvian, Lithuanian, Luxembourgish, Macedonian, Malagasy, Malay, Malayalam, Maltese, Maori, Marathi, Mongolian, Nepali, Norwegian, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Samoan, Scottish Gaelic, Serbian, Shona, Sindhi, Sinhala, Slovak, Slovenian, Somali, Sotho, Spanish, Sundanese, Swahili, Swedish, Tajik, Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Uzbek, Vietnamese, Welsh, West Frisian, Xhosa, Yiddish, Yoruba, Zulu. Note: mT5 was only pre-trained on mC4 excluding any supervised training. Therefore, this model has to be fine-tuned before it is useable on a downstream task. Pretraining Dataset: mC4 Other Community Checkpoints: here Paper: mT5: A massively multilingual pre-trained text-to-text transformer Authors: *Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel* ## Abstract The recent "Text-to-Text Transfer Transformer" (T5) leveraged a unified text-to-text format and scale to attain state-of-the-art results on a wide variety of English-language NLP tasks. In this paper, we introduce mT5, a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages. We describe the design and modified training of mT5 and demonstrate its state-of-the-art performance on many multilingual benchmarks. All of the code and model checkpoints used in this work are publicly available.
[ "## Abstract\n\nThe recent \"Text-to-Text Transfer Transformer\" (T5) leveraged a unified text-to-text format and scale to attain state-of-the-art results on a wide variety of English-language NLP tasks. In this paper, we introduce mT5, a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages. We describe the design and modified training of mT5 and demonstrate its state-of-the-art performance on many multilingual benchmarks. All of the code and model checkpoints used in this work are publicly available." ]
[ "TAGS\n#transformers #pytorch #tf #jax #mt5 #text2text-generation #multilingual #af #am #ar #az #be #bg #bn #ca #ceb #co #cs #cy #da #de #el #en #eo #es #et #eu #fa #fi #fil #fr #fy #ga #gd #gl #gu #ha #haw #hi #hmn #ht #hu #hy #ig #is #it #iw #ja #jv #ka #kk #km #kn #ko #ku #ky #la #lb #lo #lt #lv #mg #mi #mk #ml #mn #mr #ms #mt #my #ne #nl #no #ny #pa #pl #ps #pt #ro #ru #sd #si #sk #sl #sm #sn #so #sq #sr #st #su #sv #sw #ta #te #tg #th #tr #uk #und #ur #uz #vi #xh #yi #yo #zh #zu #dataset-mc4 #arxiv-2010.11934 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n", "## Abstract\n\nThe recent \"Text-to-Text Transfer Transformer\" (T5) leveraged a unified text-to-text format and scale to attain state-of-the-art results on a wide variety of English-language NLP tasks. In this paper, we introduce mT5, a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages. We describe the design and modified training of mT5 and demonstrate its state-of-the-art performance on many multilingual benchmarks. All of the code and model checkpoints used in this work are publicly available." ]
text2text-generation
transformers
[Google's mT5](https://github.com/google-research/multilingual-t5) mT5 is pretrained on the [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) corpus, covering 101 languages: Afrikaans, Albanian, Amharic, Arabic, Armenian, Azerbaijani, Basque, Belarusian, Bengali, Bulgarian, Burmese, Catalan, Cebuano, Chichewa, Chinese, Corsican, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Haitian Creole, Hausa, Hawaiian, Hebrew, Hindi, Hmong, Hungarian, Icelandic, Igbo, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish, Kyrgyz, Lao, Latin, Latvian, Lithuanian, Luxembourgish, Macedonian, Malagasy, Malay, Malayalam, Maltese, Maori, Marathi, Mongolian, Nepali, Norwegian, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Samoan, Scottish Gaelic, Serbian, Shona, Sindhi, Sinhala, Slovak, Slovenian, Somali, Sotho, Spanish, Sundanese, Swahili, Swedish, Tajik, Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Uzbek, Vietnamese, Welsh, West Frisian, Xhosa, Yiddish, Yoruba, Zulu. **Note**: mT5 was only pre-trained on mC4 excluding any supervised training. Therefore, this model has to be fine-tuned before it is useable on a downstream task. Pretraining Dataset: [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) Other Community Checkpoints: [here](https://huggingface.co/models?search=mt5) Paper: [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) Authors: *Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel* ## Abstract The recent "Text-to-Text Transfer Transformer" (T5) leveraged a unified text-to-text format and scale to attain state-of-the-art results on a wide variety of English-language NLP tasks. In this paper, we introduce mT5, a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages. We describe the design and modified training of mT5 and demonstrate its state-of-the-art performance on many multilingual benchmarks. All of the code and model checkpoints used in this work are publicly available.
{"language": ["multilingual", "af", "am", "ar", "az", "be", "bg", "bn", "ca", "ceb", "co", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fil", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "haw", "hi", "hmn", "ht", "hu", "hy", "ig", "is", "it", "iw", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lb", "lo", "lt", "lv", "mg", "mi", "mk", "ml", "mn", "mr", "ms", "mt", "my", "ne", "nl", false, "ny", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "sm", "sn", "so", "sq", "sr", "st", "su", "sv", "sw", "ta", "te", "tg", "th", "tr", "uk", "und", "ur", "uz", "vi", "xh", "yi", "yo", "zh", "zu"], "license": "apache-2.0", "datasets": ["mc4"]}
google/mt5-xxl
null
[ "transformers", "pytorch", "tf", "mt5", "text2text-generation", "multilingual", "af", "am", "ar", "az", "be", "bg", "bn", "ca", "ceb", "co", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fil", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "haw", "hi", "hmn", "ht", "hu", "hy", "ig", "is", "it", "iw", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lb", "lo", "lt", "lv", "mg", "mi", "mk", "ml", "mn", "mr", "ms", "mt", "my", "ne", "nl", "no", "ny", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "sm", "sn", "so", "sq", "sr", "st", "su", "sv", "sw", "ta", "te", "tg", "th", "tr", "uk", "und", "ur", "uz", "vi", "xh", "yi", "yo", "zh", "zu", "dataset:mc4", "arxiv:2010.11934", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2010.11934" ]
[ "multilingual", "af", "am", "ar", "az", "be", "bg", "bn", "ca", "ceb", "co", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fil", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "haw", "hi", "hmn", "ht", "hu", "hy", "ig", "is", "it", "iw", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lb", "lo", "lt", "lv", "mg", "mi", "mk", "ml", "mn", "mr", "ms", "mt", "my", "ne", "nl", "no", "ny", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "sm", "sn", "so", "sq", "sr", "st", "su", "sv", "sw", "ta", "te", "tg", "th", "tr", "uk", "und", "ur", "uz", "vi", "xh", "yi", "yo", "zh", "zu" ]
TAGS #transformers #pytorch #tf #mt5 #text2text-generation #multilingual #af #am #ar #az #be #bg #bn #ca #ceb #co #cs #cy #da #de #el #en #eo #es #et #eu #fa #fi #fil #fr #fy #ga #gd #gl #gu #ha #haw #hi #hmn #ht #hu #hy #ig #is #it #iw #ja #jv #ka #kk #km #kn #ko #ku #ky #la #lb #lo #lt #lv #mg #mi #mk #ml #mn #mr #ms #mt #my #ne #nl #no #ny #pa #pl #ps #pt #ro #ru #sd #si #sk #sl #sm #sn #so #sq #sr #st #su #sv #sw #ta #te #tg #th #tr #uk #und #ur #uz #vi #xh #yi #yo #zh #zu #dataset-mc4 #arxiv-2010.11934 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
Google's mT5 mT5 is pretrained on the mC4 corpus, covering 101 languages: Afrikaans, Albanian, Amharic, Arabic, Armenian, Azerbaijani, Basque, Belarusian, Bengali, Bulgarian, Burmese, Catalan, Cebuano, Chichewa, Chinese, Corsican, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Haitian Creole, Hausa, Hawaiian, Hebrew, Hindi, Hmong, Hungarian, Icelandic, Igbo, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish, Kyrgyz, Lao, Latin, Latvian, Lithuanian, Luxembourgish, Macedonian, Malagasy, Malay, Malayalam, Maltese, Maori, Marathi, Mongolian, Nepali, Norwegian, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Samoan, Scottish Gaelic, Serbian, Shona, Sindhi, Sinhala, Slovak, Slovenian, Somali, Sotho, Spanish, Sundanese, Swahili, Swedish, Tajik, Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Uzbek, Vietnamese, Welsh, West Frisian, Xhosa, Yiddish, Yoruba, Zulu. Note: mT5 was only pre-trained on mC4 excluding any supervised training. Therefore, this model has to be fine-tuned before it is useable on a downstream task. Pretraining Dataset: mC4 Other Community Checkpoints: here Paper: mT5: A massively multilingual pre-trained text-to-text transformer Authors: *Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel* ## Abstract The recent "Text-to-Text Transfer Transformer" (T5) leveraged a unified text-to-text format and scale to attain state-of-the-art results on a wide variety of English-language NLP tasks. In this paper, we introduce mT5, a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages. We describe the design and modified training of mT5 and demonstrate its state-of-the-art performance on many multilingual benchmarks. All of the code and model checkpoints used in this work are publicly available.
[ "## Abstract\n\nThe recent \"Text-to-Text Transfer Transformer\" (T5) leveraged a unified text-to-text format and scale to attain state-of-the-art results on a wide variety of English-language NLP tasks. In this paper, we introduce mT5, a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages. We describe the design and modified training of mT5 and demonstrate its state-of-the-art performance on many multilingual benchmarks. All of the code and model checkpoints used in this work are publicly available." ]
[ "TAGS\n#transformers #pytorch #tf #mt5 #text2text-generation #multilingual #af #am #ar #az #be #bg #bn #ca #ceb #co #cs #cy #da #de #el #en #eo #es #et #eu #fa #fi #fil #fr #fy #ga #gd #gl #gu #ha #haw #hi #hmn #ht #hu #hy #ig #is #it #iw #ja #jv #ka #kk #km #kn #ko #ku #ky #la #lb #lo #lt #lv #mg #mi #mk #ml #mn #mr #ms #mt #my #ne #nl #no #ny #pa #pl #ps #pt #ro #ru #sd #si #sk #sl #sm #sn #so #sq #sr #st #su #sv #sw #ta #te #tg #th #tr #uk #und #ur #uz #vi #xh #yi #yo #zh #zu #dataset-mc4 #arxiv-2010.11934 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n", "## Abstract\n\nThe recent \"Text-to-Text Transfer Transformer\" (T5) leveraged a unified text-to-text format and scale to attain state-of-the-art results on a wide variety of English-language NLP tasks. In this paper, we introduce mT5, a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages. We describe the design and modified training of mT5 and demonstrate its state-of-the-art performance on many multilingual benchmarks. All of the code and model checkpoints used in this work are publicly available." ]
null
transformers
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 0k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #0, captured at step 0k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details. ### How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_0k') model = TFBertModel.from_pretrained("google/multiberts-seed_0-step_0k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_0k') model = BertModel.from_pretrained("google/multiberts-seed_0-step_0k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Citation info ```bibtex @article{sellam2021multiberts, title={The MultiBERTs: BERT Reproductions for Robustness Analysis}, author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, journal={arXiv preprint arXiv:2106.16163}, year={2021} } ```
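Because every intermediate snapshot is exposed under an id of the form `google/multiberts-seed_0-step_<N>k`, one convenient pattern is to loop over a list of steps and load each checkpoint in turn, e.g. to probe how some behaviour evolves during pre-training. A minimal sketch follows; the step values listed are illustrative picks (only `0k` and `1000k` appear explicitly in this section), not the full set of released checkpoints.

```python
from transformers import BertTokenizer, BertModel

# Illustrative subset of steps -- see the MultiBERTs release for the complete
# list of 28 checkpoints saved for this seed.
steps = ["0k", "1000k", "2000k"]

tokenizer = BertTokenizer.from_pretrained("google/multiberts-seed_0-step_0k")
for step in steps:
    model = BertModel.from_pretrained(f"google/multiberts-seed_0-step_{step}")
    # ... evaluate or probe `model` here to track behaviour across pre-training ...
```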
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_0", "multiberts-seed_0-step_0k"]}
google/multiberts-seed_0-step_0k
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "multiberts", "multiberts-seed_0", "multiberts-seed_0-step_0k", "en", "arxiv:2106.16163", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.16163", "1908.08962" ]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_0k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 0k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as the original BERT model but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through URL We describe them in our paper The MultiBERTs: BERT Reproductions for Robustness Analysis. This is model #0, captured at step 0k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of BERT-base uncased, for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to BERT-base uncased. Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for Turc et al., 2019. This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our technical report for more details. ### How to use Using code from BERT-base uncased, here is an example based on Tensorflow: PyTorch version: info
[ "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 0k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 0k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_0k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n", "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 0k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 0k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
null
transformers
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 1000k

MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters to
[the original BERT model](https://github.com/google-research/bert) but
with different random seeds, which cause variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.

We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).

The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).

This is model #0, captured at step 1000k (max: 2000k, i.e., 2M steps).

## Model Description

This model was captured during a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.

The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences from the original model:

* We pre-trained the MultiBERTs models for 2 million steps using sequence
  length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
  collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).

This is a best-effort reproduction, and so it is probable that differences from
the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is often comparable to that of the original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms the original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.

### How to use

Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
TensorFlow:

```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_1000k')
model = TFBertModel.from_pretrained("google/multiberts-seed_0-step_1000k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```

PyTorch version:

```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_1000k')
model = BertModel.from_pretrained("google/multiberts-seed_0-step_1000k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

## Citation info

```bibtex
@article{sellam2021multiberts,
  title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
  author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
  journal={arXiv preprint arXiv:2106.16163},
  year={2021}
}
```
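Because the checkpoint was trained with the MLM objective, a quick qualitative check is to query it through the `fill-mask` pipeline. The snippet below is a minimal sketch, assuming the masked-language-modelling head stored in this pretraining checkpoint loads into the pipeline's underlying `BertForMaskedLM`; the probe sentence is arbitrary.

```
from transformers import pipeline

# Sketch: probe the MLM head of this intermediate checkpoint.
# Assumes the saved pretraining weights include the masked-LM head.
unmasker = pipeline("fill-mask", model="google/multiberts-seed_0-step_1000k")
for prediction in unmasker("The capital of France is [MASK]."):
    print(f"{prediction['token_str']:>12}  score={prediction['score']:.3f}")
```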
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_0", "multiberts-seed_0-step_1000k"]}
google/multiberts-seed_0-step_1000k
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "multiberts", "multiberts-seed_0", "multiberts-seed_0-step_1000k", "en", "arxiv:2106.16163", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.16163", "1908.08962" ]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_1000k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 1000k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as the original BERT model but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through URL We describe them in our paper The MultiBERTs: BERT Reproductions for Robustness Analysis. This is model #0, captured at step 1000k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of BERT-base uncased, for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to BERT-base uncased. Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for Turc et al., 2019. This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our technical report for more details. ### How to use Using code from BERT-base uncased, here is an example based on Tensorflow: PyTorch version: info
[ "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 1000k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 1000k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_1000k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n", "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 1000k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 1000k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
null
transformers
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 100k

MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters to
[the original BERT model](https://github.com/google-research/bert) but
with different random seeds, which cause variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.

We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).

The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).

This is model #0, captured at step 100k (max: 2000k, i.e., 2M steps).

## Model Description

This model was captured during a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.

The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences from the original model:

* We pre-trained the MultiBERTs models for 2 million steps using sequence
  length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
  collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).

This is a best-effort reproduction, and so it is probable that differences from
the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is often comparable to that of the original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms the original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.

### How to use

Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
TensorFlow:

```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_100k')
model = TFBertModel.from_pretrained("google/multiberts-seed_0-step_100k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```

PyTorch version:

```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_100k')
model = BertModel.from_pretrained("google/multiberts-seed_0-step_100k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

## Citation info

```bibtex
@article{sellam2021multiberts,
  title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
  author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
  journal={arXiv preprint arXiv:2106.16163},
  year={2021}
}
```
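The card above also lists the NSP objective. If the NSP head is stored alongside the MLM head (as the `pretraining` tag suggests), it can be inspected with `BertForPreTraining`; the sketch below makes that assumption, and the sentence pair is chosen arbitrarily.

```
import torch
from transformers import BertTokenizer, BertForPreTraining

# Sketch: inspect the next-sentence-prediction head of this checkpoint.
# Assumes the NSP head weights are present in the saved pretraining weights.
tokenizer = BertTokenizer.from_pretrained("google/multiberts-seed_0-step_100k")
model = BertForPreTraining.from_pretrained("google/multiberts-seed_0-step_100k")

inputs = tokenizer("The weather was terrible.", "So we stayed indoors.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Index 0 = "sentence B follows sentence A", index 1 = "sentence B is random".
print(torch.softmax(outputs.seq_relationship_logits, dim=-1))
```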
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_0", "multiberts-seed_0-step_100k"]}
google/multiberts-seed_0-step_100k
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "multiberts", "multiberts-seed_0", "multiberts-seed_0-step_100k", "en", "arxiv:2106.16163", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.16163", "1908.08962" ]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_100k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 100k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as the original BERT model but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through URL We describe them in our paper The MultiBERTs: BERT Reproductions for Robustness Analysis. This is model #0, captured at step 100k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of BERT-base uncased, for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to BERT-base uncased. Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for Turc et al., 2019. This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our technical report for more details. ### How to use Using code from BERT-base uncased, here is an example based on Tensorflow: PyTorch version: info
[ "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 100k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 100k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_100k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n", "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 100k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 100k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
null
transformers
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 1100k

MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters to
[the original BERT model](https://github.com/google-research/bert) but
with different random seeds, which cause variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.

We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).

The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).

This is model #0, captured at step 1100k (max: 2000k, i.e., 2M steps).

## Model Description

This model was captured during a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.

The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences from the original model:

* We pre-trained the MultiBERTs models for 2 million steps using sequence
  length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
  collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).

This is a best-effort reproduction, and so it is probable that differences from
the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is often comparable to that of the original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms the original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.

### How to use

Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
TensorFlow:

```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_1100k')
model = TFBertModel.from_pretrained("google/multiberts-seed_0-step_1100k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```

PyTorch version:

```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_1100k')
model = BertModel.from_pretrained("google/multiberts-seed_0-step_1100k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

## Citation info

```bibtex
@article{sellam2021multiberts,
  title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
  author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
  journal={arXiv preprint arXiv:2106.16163},
  year={2021}
}
```
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_0", "multiberts-seed_0-step_1100k"]}
google/multiberts-seed_0-step_1100k
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "multiberts", "multiberts-seed_0", "multiberts-seed_0-step_1100k", "en", "arxiv:2106.16163", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.16163", "1908.08962" ]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_1100k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 1100k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as the original BERT model but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through URL We describe them in our paper The MultiBERTs: BERT Reproductions for Robustness Analysis. This is model #0, captured at step 1100k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of BERT-base uncased, for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to BERT-base uncased. Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for Turc et al., 2019. This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our technical report for more details. ### How to use Using code from BERT-base uncased, here is an example based on Tensorflow: PyTorch version: info
[ "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 1100k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 1100k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_1100k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n", "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 1100k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 1100k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
null
transformers
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 1200k

MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters to
[the original BERT model](https://github.com/google-research/bert) but
with different random seeds, which cause variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.

We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).

The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).

This is model #0, captured at step 1200k (max: 2000k, i.e., 2M steps).

## Model Description

This model was captured during a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.

The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences from the original model:

* We pre-trained the MultiBERTs models for 2 million steps using sequence
  length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
  collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).

This is a best-effort reproduction, and so it is probable that differences from
the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is often comparable to that of the original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms the original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.

### How to use

Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
TensorFlow:

```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_1200k')
model = TFBertModel.from_pretrained("google/multiberts-seed_0-step_1200k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```

PyTorch version:

```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_1200k')
model = BertModel.from_pretrained("google/multiberts-seed_0-step_1200k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

## Citation info

```bibtex
@article{sellam2021multiberts,
  title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
  author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
  journal={arXiv preprint arXiv:2106.16163},
  year={2021}
}
```
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_0", "multiberts-seed_0-step_1200k"]}
google/multiberts-seed_0-step_1200k
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "multiberts", "multiberts-seed_0", "multiberts-seed_0-step_1200k", "en", "arxiv:2106.16163", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.16163", "1908.08962" ]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_1200k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 1200k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as the original BERT model but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through URL We describe them in our paper The MultiBERTs: BERT Reproductions for Robustness Analysis. This is model #0, captured at step 1200k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of BERT-base uncased, for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to BERT-base uncased. Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for Turc et al., 2019. This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our technical report for more details. ### How to use Using code from BERT-base uncased, here is an example based on Tensorflow: PyTorch version: info
[ "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 1200k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 1200k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_1200k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n", "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 1200k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 1200k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
null
transformers
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 120k

MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters to
[the original BERT model](https://github.com/google-research/bert) but
with different random seeds, which cause variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.

We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).

The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).

This is model #0, captured at step 120k (max: 2000k, i.e., 2M steps).

## Model Description

This model was captured during a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.

The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences from the original model:

* We pre-trained the MultiBERTs models for 2 million steps using sequence
  length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
  collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).

This is a best-effort reproduction, and so it is probable that differences from
the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is often comparable to that of the original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms the original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.

### How to use

Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
TensorFlow:

```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_120k')
model = TFBertModel.from_pretrained("google/multiberts-seed_0-step_120k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```

PyTorch version:

```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_120k')
model = BertModel.from_pretrained("google/multiberts-seed_0-step_120k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

## Citation info

```bibtex
@article{sellam2021multiberts,
  title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
  author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
  journal={arXiv preprint arXiv:2106.16163},
  year={2021}
}
```
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_0", "multiberts-seed_0-step_120k"]}
google/multiberts-seed_0-step_120k
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "multiberts", "multiberts-seed_0", "multiberts-seed_0-step_120k", "en", "arxiv:2106.16163", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.16163", "1908.08962" ]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_120k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 120k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as the original BERT model but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through URL We describe them in our paper The MultiBERTs: BERT Reproductions for Robustness Analysis. This is model #0, captured at step 120k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of BERT-base uncased, for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to BERT-base uncased. Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for Turc et al., 2019. This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our technical report for more details. ### How to use Using code from BERT-base uncased, here is an example based on Tensorflow: PyTorch version: info
[ "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 120k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 120k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_120k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n", "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 120k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 120k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
null
transformers
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 1300k

MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters to
[the original BERT model](https://github.com/google-research/bert) but
with different random seeds, which cause variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.

We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).

The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).

This is model #0, captured at step 1300k (max: 2000k, i.e., 2M steps).

## Model Description

This model was captured during a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.

The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences from the original model:

* We pre-trained the MultiBERTs models for 2 million steps using sequence
  length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
  collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).

This is a best-effort reproduction, and so it is probable that differences from
the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is often comparable to that of the original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms the original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.

### How to use

Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
TensorFlow:

```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_1300k')
model = TFBertModel.from_pretrained("google/multiberts-seed_0-step_1300k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```

PyTorch version:

```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_1300k')
model = BertModel.from_pretrained("google/multiberts-seed_0-step_1300k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

## Citation info

```bibtex
@article{sellam2021multiberts,
  title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
  author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
  journal={arXiv preprint arXiv:2106.16163},
  year={2021}
}
```
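Because the intermediate checkpoints share the naming pattern `google/multiberts-seed_0-step_{N}k`, the loading code above can be reused to track how a representation evolves over pre-training. The sketch below is illustrative only: it uses a handful of step values that appear in this release and an arbitrary probe sentence.

```
import torch
from transformers import BertTokenizer, BertModel

# Sketch: compare the [CLS] representation of one sentence across a few
# intermediate checkpoints of seed 0. The step list is illustrative; see the
# MultiBERTs release for the full set of saved checkpoints.
steps = ["100k", "1000k", "1300k"]
text = "Replace me by any text you'd like."

cls_vectors = {}
for step in steps:
    name = f"google/multiberts-seed_0-step_{step}"
    tokenizer = BertTokenizer.from_pretrained(name)
    model = BertModel.from_pretrained(name)
    with torch.no_grad():
        output = model(**tokenizer(text, return_tensors="pt"))
    cls_vectors[step] = output.last_hidden_state[0, 0]  # [CLS] vector

for earlier, later in zip(steps, steps[1:]):
    cos = torch.nn.functional.cosine_similarity(cls_vectors[earlier], cls_vectors[later], dim=0)
    print(f"cosine([CLS] @ {earlier}, [CLS] @ {later}) = {cos.item():.3f}")
```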
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_0", "multiberts-seed_0-step_1300k"]}
google/multiberts-seed_0-step_1300k
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "multiberts", "multiberts-seed_0", "multiberts-seed_0-step_1300k", "en", "arxiv:2106.16163", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.16163", "1908.08962" ]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_1300k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 1300k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as the original BERT model but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through URL We describe them in our paper The MultiBERTs: BERT Reproductions for Robustness Analysis. This is model #0, captured at step 1300k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of BERT-base uncased, for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to BERT-base uncased. Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for Turc et al., 2019. This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our technical report for more details. ### How to use Using code from BERT-base uncased, here is an example based on Tensorflow: PyTorch version: info
[ "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 1300k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 1300k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_1300k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n", "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 1300k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 1300k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
null
transformers
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 1400k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #0, captured at step 1400k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details. ### How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_1400k') model = TFBertModel.from_pretrained("google/multiberts-seed_0-step_1400k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_1400k') model = BertModel.from_pretrained("google/multiberts-seed_0-step_1400k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Citation info ```bibtex @article{sellam2021multiberts, title={The MultiBERTs: BERT Reproductions for Robustness Analysis}, author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, journal={arXiv preprint arXiv:2106.16163}, year={2021} } ```
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_0", "multiberts-seed_0-step_1400k"]}
google/multiberts-seed_0-step_1400k
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "multiberts", "multiberts-seed_0", "multiberts-seed_0-step_1400k", "en", "arxiv:2106.16163", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.16163", "1908.08962" ]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_1400k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 1400k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as the original BERT model but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through URL We describe them in our paper The MultiBERTs: BERT Reproductions for Robustness Analysis. This is model #0, captured at step 1400k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of BERT-base uncased, for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to BERT-base uncased. Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for Turc et al., 2019. This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our technical report for more details. ### How to use Using code from BERT-base uncased, here is an example based on Tensorflow: PyTorch version: info
[ "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 1400k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 1400k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_1400k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n", "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 1400k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 1400k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
null
transformers
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 140k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #0, captured at step 140k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details. ### How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_140k') model = TFBertModel.from_pretrained("google/multiberts-seed_0-step_140k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_140k') model = BertModel.from_pretrained("google/multiberts-seed_0-step_140k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Citation info ```bibtex @article{sellam2021multiberts, title={The MultiBERTs: BERT Reproductions for Robustness Analysis}, author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, journal={arXiv preprint arXiv:2106.16163}, year={2021} } ```
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_0", "multiberts-seed_0-step_140k"]}
google/multiberts-seed_0-step_140k
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "multiberts", "multiberts-seed_0", "multiberts-seed_0-step_140k", "en", "arxiv:2106.16163", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.16163", "1908.08962" ]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_140k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 140k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as the original BERT model but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through URL We describe them in our paper The MultiBERTs: BERT Reproductions for Robustness Analysis. This is model #0, captured at step 140k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of BERT-base uncased, for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to BERT-base uncased. Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for Turc et al., 2019. This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our technical report for more details. ### How to use Using code from BERT-base uncased, here is an example based on Tensorflow: PyTorch version: info
[ "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 140k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 140k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_140k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n", "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 140k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 140k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
null
transformers
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 1500k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #0, captured at step 1500k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details. ### How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_1500k') model = TFBertModel.from_pretrained("google/multiberts-seed_0-step_1500k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_1500k') model = BertModel.from_pretrained("google/multiberts-seed_0-step_1500k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Citation info ```bibtex @article{sellam2021multiberts, title={The MultiBERTs: BERT Reproductions for Robustness Analysis}, author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, journal={arXiv preprint arXiv:2106.16163}, year={2021} } ```
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_0", "multiberts-seed_0-step_1500k"]}
google/multiberts-seed_0-step_1500k
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "multiberts", "multiberts-seed_0", "multiberts-seed_0-step_1500k", "en", "arxiv:2106.16163", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.16163", "1908.08962" ]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_1500k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 1500k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as the original BERT model but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through URL We describe them in our paper The MultiBERTs: BERT Reproductions for Robustness Analysis. This is model #0, captured at step 1500k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of BERT-base uncased, for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to BERT-base uncased. Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for Turc et al., 2019. This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our technical report for more details. ### How to use Using code from BERT-base uncased, here is an example based on Tensorflow: PyTorch version: info
[ "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 1500k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 1500k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_1500k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n", "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 1500k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 1500k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
null
transformers
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 1600k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #0, captured at step 1600k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details. ### How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_1600k') model = TFBertModel.from_pretrained("google/multiberts-seed_0-step_1600k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_1600k') model = BertModel.from_pretrained("google/multiberts-seed_0-step_1600k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Citation info ```bibtex @article{sellam2021multiberts, title={The MultiBERTs: BERT Reproductions for Robustness Analysis}, author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, journal={arXiv preprint arXiv:2106.16163}, year={2021} } ```
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_0", "multiberts-seed_0-step_1600k"]}
google/multiberts-seed_0-step_1600k
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "multiberts", "multiberts-seed_0", "multiberts-seed_0-step_1600k", "en", "arxiv:2106.16163", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.16163", "1908.08962" ]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_1600k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 1600k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as the original BERT model but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through URL We describe them in our paper The MultiBERTs: BERT Reproductions for Robustness Analysis. This is model #0, captured at step 1600k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of BERT-base uncased, for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to BERT-base uncased. Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for Turc et al., 2019. This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our technical report for more details. ### How to use Using code from BERT-base uncased, here is an example based on Tensorflow: PyTorch version: info
[ "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 1600k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 1600k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_1600k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n", "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 1600k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 1600k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
null
transformers
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 160k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #0, captured at step 160k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details. ### How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_160k') model = TFBertModel.from_pretrained("google/multiberts-seed_0-step_160k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_160k') model = BertModel.from_pretrained("google/multiberts-seed_0-step_160k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Citation info ```bibtex @article{sellam2021multiberts, title={The MultiBERTs: BERT Reproductions for Robustness Analysis}, author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, journal={arXiv preprint arXiv:2106.16163}, year={2021} } ```
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_0", "multiberts-seed_0-step_160k"]}
google/multiberts-seed_0-step_160k
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "multiberts", "multiberts-seed_0", "multiberts-seed_0-step_160k", "en", "arxiv:2106.16163", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.16163", "1908.08962" ]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_160k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 160k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as the original BERT model but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through URL We describe them in our paper The MultiBERTs: BERT Reproductions for Robustness Analysis. This is model #0, captured at step 160k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of BERT-base uncased, for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to BERT-base uncased. Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for Turc et al., 2019. This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our technical report for more details. ### How to use Using code from BERT-base uncased, here is an example based on Tensorflow: PyTorch version: info
[ "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 160k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 160k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_160k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n", "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 160k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 160k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
null
transformers
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 1700k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #0, captured at step 1700k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details. ### How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_1700k') model = TFBertModel.from_pretrained("google/multiberts-seed_0-step_1700k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_1700k') model = BertModel.from_pretrained("google/multiberts-seed_0-step_1700k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Citation info ```bibtex @article{sellam2021multiberts, title={The MultiBERTs: BERT Reproductions for Robustness Analysis}, author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, journal={arXiv preprint arXiv:2106.16163}, year={2021} } ```
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_0", "multiberts-seed_0-step_1700k"]}
google/multiberts-seed_0-step_1700k
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "multiberts", "multiberts-seed_0", "multiberts-seed_0-step_1700k", "en", "arxiv:2106.16163", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.16163", "1908.08962" ]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_1700k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 1700k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with hyper-parameters similar to those of the original BERT model but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through URL. We describe them in our paper The MultiBERTs: BERT Reproductions for Robustness Analysis. This is model #0, captured at step 1700k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of BERT-base uncased, for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to those of BERT-base uncased. Two major differences from the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for Turc et al., 2019. This is a best-effort reproduction, and so it is probable that differences from the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our technical report for more details. ### How to use Using code from BERT-base uncased, the original card gives an example based on TensorFlow and a PyTorch version; a reconstructed sketch follows below.
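The code blocks are missing from this plain-text version; the snippet below restores the minimal PyTorch example from the full card for google/multiberts-seed_0-step_1700k. The TensorFlow variant uses TFBertModel with return_tensors='tf'.

```
from transformers import BertTokenizer, BertModel

# Load the seed-0, step-1700k intermediate checkpoint.
tokenizer = BertTokenizer.from_pretrained("google/multiberts-seed_0-step_1700k")
model = BertModel.from_pretrained("google/multiberts-seed_0-step_1700k")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors="pt")
output = model(**encoded_input)  # output.last_hidden_state holds per-token embeddings
```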
[ "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 1700k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 1700k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_1700k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n", "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 1700k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 1700k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
null
transformers
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 1800k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #0, captured at step 1800k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details. ### How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_1800k') model = TFBertModel.from_pretrained("google/multiberts-seed_0-step_1800k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_1800k') model = BertModel.from_pretrained("google/multiberts-seed_0-step_1800k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Citation info ```bibtex @article{sellam2021multiberts, title={The MultiBERTs: BERT Reproductions for Robustness Analysis}, author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, journal={arXiv preprint arXiv:2106.16163}, year={2021} } ```
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_0", "multiberts-seed_0-step_1800k"]}
google/multiberts-seed_0-step_1800k
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "multiberts", "multiberts-seed_0", "multiberts-seed_0-step_1800k", "en", "arxiv:2106.16163", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.16163", "1908.08962" ]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_1800k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 1800k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with hyper-parameters similar to those of the original BERT model but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through URL. We describe them in our paper The MultiBERTs: BERT Reproductions for Robustness Analysis. This is model #0, captured at step 1800k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of BERT-base uncased, for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to those of BERT-base uncased. Two major differences from the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for Turc et al., 2019. This is a best-effort reproduction, and so it is probable that differences from the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our technical report for more details. ### How to use Using code from BERT-base uncased, the original card gives an example based on TensorFlow and a PyTorch version; a reconstructed sketch follows below.
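As above, the fenced examples were stripped in this rendering. A minimal TensorFlow sketch for google/multiberts-seed_0-step_1800k, taken from the full card, looks like this (the PyTorch variant uses BertModel and return_tensors='pt'):

```
from transformers import BertTokenizer, TFBertModel

# Load the seed-0, step-1800k intermediate checkpoint (TensorFlow API).
tokenizer = BertTokenizer.from_pretrained("google/multiberts-seed_0-step_1800k")
model = TFBertModel.from_pretrained("google/multiberts-seed_0-step_1800k")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors="tf")
output = model(encoded_input)  # last_hidden_state and pooler_output
```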
[ "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 1800k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 1800k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_1800k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n", "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 1800k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 1800k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
null
transformers
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 180k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #0, captured at step 180k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details. ### How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_180k') model = TFBertModel.from_pretrained("google/multiberts-seed_0-step_180k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_180k') model = BertModel.from_pretrained("google/multiberts-seed_0-step_180k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Citation info ```bibtex @article{sellam2021multiberts, title={The MultiBERTs: BERT Reproductions for Robustness Analysis}, author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, journal={arXiv preprint arXiv:2106.16163}, year={2021} } ```
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_0", "multiberts-seed_0-step_180k"]}
google/multiberts-seed_0-step_180k
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "multiberts", "multiberts-seed_0", "multiberts-seed_0-step_180k", "en", "arxiv:2106.16163", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.16163", "1908.08962" ]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_180k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 180k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with hyper-parameters similar to those of the original BERT model but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through URL. We describe them in our paper The MultiBERTs: BERT Reproductions for Robustness Analysis. This is model #0, captured at step 180k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of BERT-base uncased, for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to those of BERT-base uncased. Two major differences from the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for Turc et al., 2019. This is a best-effort reproduction, and so it is probable that differences from the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our technical report for more details. ### How to use Using code from BERT-base uncased, the original card gives an example based on TensorFlow and a PyTorch version; a reconstructed sketch follows below.
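The usage code was removed from this flattened card text; a minimal PyTorch sketch for google/multiberts-seed_0-step_180k, mirroring the full card, is:

```
from transformers import BertTokenizer, BertModel

# Load the seed-0, step-180k intermediate checkpoint.
tokenizer = BertTokenizer.from_pretrained("google/multiberts-seed_0-step_180k")
model = BertModel.from_pretrained("google/multiberts-seed_0-step_180k")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors="pt")
output = model(**encoded_input)  # contextual embeddings for the input tokens
```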
[ "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 180k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 180k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_180k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n", "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 180k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 180k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
null
transformers
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 1900k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #0, captured at step 1900k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details. ### How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_1900k') model = TFBertModel.from_pretrained("google/multiberts-seed_0-step_1900k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_1900k') model = BertModel.from_pretrained("google/multiberts-seed_0-step_1900k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Citation info ```bibtex @article{sellam2021multiberts, title={The MultiBERTs: BERT Reproductions for Robustness Analysis}, author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, journal={arXiv preprint arXiv:2106.16163}, year={2021} } ```
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_0", "multiberts-seed_0-step_1900k"]}
google/multiberts-seed_0-step_1900k
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "multiberts", "multiberts-seed_0", "multiberts-seed_0-step_1900k", "en", "arxiv:2106.16163", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.16163", "1908.08962" ]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_1900k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 1900k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with hyper-parameters similar to those of the original BERT model but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through URL. We describe them in our paper The MultiBERTs: BERT Reproductions for Robustness Analysis. This is model #0, captured at step 1900k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of BERT-base uncased, for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to those of BERT-base uncased. Two major differences from the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for Turc et al., 2019. This is a best-effort reproduction, and so it is probable that differences from the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our technical report for more details. ### How to use Using code from BERT-base uncased, the original card gives an example based on TensorFlow and a PyTorch version; a reconstructed sketch follows below.
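The fenced snippets are absent from this rendering; here is a minimal TensorFlow sketch for google/multiberts-seed_0-step_1900k, reconstructed from the full card (for PyTorch, substitute BertModel and return_tensors='pt'):

```
from transformers import BertTokenizer, TFBertModel

# Load the seed-0, step-1900k intermediate checkpoint (TensorFlow API).
tokenizer = BertTokenizer.from_pretrained("google/multiberts-seed_0-step_1900k")
model = TFBertModel.from_pretrained("google/multiberts-seed_0-step_1900k")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors="tf")
output = model(encoded_input)  # last_hidden_state and pooler_output
```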
[ "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 1900k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 1900k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_1900k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n", "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 1900k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 1900k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
null
transformers
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 2000k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #0, captured at step 2000k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details. ### How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_2000k') model = TFBertModel.from_pretrained("google/multiberts-seed_0-step_2000k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_2000k') model = BertModel.from_pretrained("google/multiberts-seed_0-step_2000k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Citation info ```bibtex @article{sellam2021multiberts, title={The MultiBERTs: BERT Reproductions for Robustness Analysis}, author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, journal={arXiv preprint arXiv:2106.16163}, year={2021} } ```
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_0", "multiberts-seed_0-step_2000k"]}
google/multiberts-seed_0-step_2000k
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "multiberts", "multiberts-seed_0", "multiberts-seed_0-step_2000k", "en", "arxiv:2106.16163", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.16163", "1908.08962" ]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_2000k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 2000k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with hyper-parameters similar to those of the original BERT model but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through URL. We describe them in our paper The MultiBERTs: BERT Reproductions for Robustness Analysis. This is model #0, captured at step 2000k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of BERT-base uncased, for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to those of BERT-base uncased. Two major differences from the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for Turc et al., 2019. This is a best-effort reproduction, and so it is probable that differences from the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our technical report for more details. ### How to use Using code from BERT-base uncased, the original card gives an example based on TensorFlow and a PyTorch version; a reconstructed sketch follows below.
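The code examples were stripped from this plain-text version. Since step 2000k is the final checkpoint of this run, the sketch below (restored from the full card) loads google/multiberts-seed_0-step_2000k with the standard PyTorch API:

```
from transformers import BertTokenizer, BertModel

# Load the seed-0, step-2000k checkpoint, i.e. the end of the 2M-step run.
tokenizer = BertTokenizer.from_pretrained("google/multiberts-seed_0-step_2000k")
model = BertModel.from_pretrained("google/multiberts-seed_0-step_2000k")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors="pt")
output = model(**encoded_input)  # output.last_hidden_state holds per-token embeddings
```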
[ "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 2000k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 2000k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_2000k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n", "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 2000k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 2000k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
null
transformers
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 200k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #0, captured at step 200k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details. ### How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_200k') model = TFBertModel.from_pretrained("google/multiberts-seed_0-step_200k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_200k') model = BertModel.from_pretrained("google/multiberts-seed_0-step_200k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Citation info ```bibtex @article{sellam2021multiberts, title={The MultiBERTs: BERT Reproductions for Robustness Analysis}, author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, journal={arXiv preprint arXiv:2106.16163}, year={2021} } ```
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_0", "multiberts-seed_0-step_200k"]}
google/multiberts-seed_0-step_200k
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "multiberts", "multiberts-seed_0", "multiberts-seed_0-step_200k", "en", "arxiv:2106.16163", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.16163", "1908.08962" ]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_200k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 200k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as the original BERT model but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through URL We describe them in our paper The MultiBERTs: BERT Reproductions for Robustness Analysis. This is model #0, captured at step 200k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of BERT-base uncased, for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to BERT-base uncased. Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for Turc et al., 2019. This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our technical report for more details. ### How to use Using code from BERT-base uncased, here is an example based on Tensorflow: PyTorch version: info
[ "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 200k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 200k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_200k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n", "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 200k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 200k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
null
transformers
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 20k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #0, captured at step 20k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details. ### How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_20k') model = TFBertModel.from_pretrained("google/multiberts-seed_0-step_20k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_20k') model = BertModel.from_pretrained("google/multiberts-seed_0-step_20k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Citation info ```bibtex @article{sellam2021multiberts, title={The MultiBERTs: BERT Reproductions for Robustness Analysis}, author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, journal={arXiv preprint arXiv:2106.16163}, year={2021} } ```
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_0", "multiberts-seed_0-step_20k"]}
google/multiberts-seed_0-step_20k
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "multiberts", "multiberts-seed_0", "multiberts-seed_0-step_20k", "en", "arxiv:2106.16163", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.16163", "1908.08962" ]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_20k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 20k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as the original BERT model but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through URL We describe them in our paper The MultiBERTs: BERT Reproductions for Robustness Analysis. This is model #0, captured at step 20k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of BERT-base uncased, for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to BERT-base uncased. Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for Turc et al., 2019. This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our technical report for more details. ### How to use Using code from BERT-base uncased, here is an example based on Tensorflow: PyTorch version: info
[ "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 20k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 20k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_20k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n", "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 20k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 20k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
null
transformers
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 300k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #0, captured at step 300k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details. ### How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_300k') model = TFBertModel.from_pretrained("google/multiberts-seed_0-step_300k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_300k') model = BertModel.from_pretrained("google/multiberts-seed_0-step_300k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Citation info ```bibtex @article{sellam2021multiberts, title={The MultiBERTs: BERT Reproductions for Robustness Analysis}, author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, journal={arXiv preprint arXiv:2106.16163}, year={2021} } ```
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_0", "multiberts-seed_0-step_300k"]}
google/multiberts-seed_0-step_300k
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "multiberts", "multiberts-seed_0", "multiberts-seed_0-step_300k", "en", "arxiv:2106.16163", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.16163", "1908.08962" ]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_300k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 300k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as the original BERT model but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through URL We describe them in our paper The MultiBERTs: BERT Reproductions for Robustness Analysis. This is model #0, captured at step 300k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of BERT-base uncased, for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to BERT-base uncased. Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for Turc et al., 2019. This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our technical report for more details. ### How to use Using code from BERT-base uncased, here is an example based on Tensorflow: PyTorch version: info
[ "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 300k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 300k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_300k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n", "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 300k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 300k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
null
transformers
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 400k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #0, captured at step 400k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details. ### How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_400k') model = TFBertModel.from_pretrained("google/multiberts-seed_0-step_400k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_400k') model = BertModel.from_pretrained("google/multiberts-seed_0-step_400k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Citation info ```bibtex @article{sellam2021multiberts, title={The MultiBERTs: BERT Reproductions for Robustness Analysis}, author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, journal={arXiv preprint arXiv:2106.16163}, year={2021} } ```
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_0", "multiberts-seed_0-step_400k"]}
google/multiberts-seed_0-step_400k
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "multiberts", "multiberts-seed_0", "multiberts-seed_0-step_400k", "en", "arxiv:2106.16163", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.16163", "1908.08962" ]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_400k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 400k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as the original BERT model but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through URL We describe them in our paper The MultiBERTs: BERT Reproductions for Robustness Analysis. This is model #0, captured at step 400k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of BERT-base uncased, for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to BERT-base uncased. Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for Turc et al., 2019. This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our technical report for more details. ### How to use Using code from BERT-base uncased, here is an example based on Tensorflow: PyTorch version: info
[ "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 400k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 400k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_400k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n", "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 400k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 400k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
null
transformers
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 40k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #0, captured at step 40k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details. ### How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_40k') model = TFBertModel.from_pretrained("google/multiberts-seed_0-step_40k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_40k') model = BertModel.from_pretrained("google/multiberts-seed_0-step_40k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Citation info ```bibtex @article{sellam2021multiberts, title={The MultiBERTs: BERT Reproductions for Robustness Analysis}, author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, journal={arXiv preprint arXiv:2106.16163}, year={2021} } ```
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_0", "multiberts-seed_0-step_40k"]}
google/multiberts-seed_0-step_40k
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "multiberts", "multiberts-seed_0", "multiberts-seed_0-step_40k", "en", "arxiv:2106.16163", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.16163", "1908.08962" ]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_40k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 40k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as the original BERT model but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through URL We describe them in our paper The MultiBERTs: BERT Reproductions for Robustness Analysis. This is model #0, captured at step 40k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of BERT-base uncased, for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to BERT-base uncased. Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for Turc et al., 2019. This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our technical report for more details. ### How to use Using code from BERT-base uncased, here is an example based on Tensorflow: PyTorch version: info
[ "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 40k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 40k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_40k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n", "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 40k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 40k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
null
transformers
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 500k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #0, captured at step 500k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details. ### How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_500k') model = TFBertModel.from_pretrained("google/multiberts-seed_0-step_500k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_500k') model = BertModel.from_pretrained("google/multiberts-seed_0-step_500k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Citation info ```bibtex @article{sellam2021multiberts, title={The MultiBERTs: BERT Reproductions for Robustness Analysis}, author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, journal={arXiv preprint arXiv:2106.16163}, year={2021} } ```
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_0", "multiberts-seed_0-step_500k"]}
google/multiberts-seed_0-step_500k
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "multiberts", "multiberts-seed_0", "multiberts-seed_0-step_500k", "en", "arxiv:2106.16163", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.16163", "1908.08962" ]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_500k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 500k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as the original BERT model but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through URL We describe them in our paper The MultiBERTs: BERT Reproductions for Robustness Analysis. This is model #0, captured at step 500k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of BERT-base uncased, for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to BERT-base uncased. Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for Turc et al., 2019. This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our technical report for more details. ### How to use Using code from BERT-base uncased, here is an example based on Tensorflow: PyTorch version: info
[ "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 500k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 500k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_500k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n", "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 500k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 500k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
null
transformers
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 600k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #0, captured at step 600k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details. ### How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_600k') model = TFBertModel.from_pretrained("google/multiberts-seed_0-step_600k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_600k') model = BertModel.from_pretrained("google/multiberts-seed_0-step_600k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Citation info ```bibtex @article{sellam2021multiberts, title={The MultiBERTs: BERT Reproductions for Robustness Analysis}, author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, journal={arXiv preprint arXiv:2106.16163}, year={2021} } ```
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_0", "multiberts-seed_0-step_600k"]}
google/multiberts-seed_0-step_600k
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "multiberts", "multiberts-seed_0", "multiberts-seed_0-step_600k", "en", "arxiv:2106.16163", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.16163", "1908.08962" ]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_600k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 600k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as the original BERT model but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through URL We describe them in our paper The MultiBERTs: BERT Reproductions for Robustness Analysis. This is model #0, captured at step 600k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of BERT-base uncased, for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to BERT-base uncased. Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for Turc et al., 2019. This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our technical report for more details. ### How to use Using code from BERT-base uncased, here is an example based on Tensorflow: PyTorch version: info
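The TensorFlow and PyTorch snippets were dropped from this plain-text rendering; the full card earlier in this record keeps them. For reference, the PyTorch version for this checkpoint is:

```
from transformers import BertTokenizer, BertModel

# Load the seed-0, step-600k intermediate checkpoint (repo ID taken from this record).
tokenizer = BertTokenizer.from_pretrained("google/multiberts-seed_0-step_600k")
model = BertModel.from_pretrained("google/multiberts-seed_0-step_600k")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors="pt")
output = model(**encoded_input)
```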
[ "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 600k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 600k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_600k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n", "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 600k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 600k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
null
transformers
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 60k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #0, captured at step 60k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details. ### How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_60k') model = TFBertModel.from_pretrained("google/multiberts-seed_0-step_60k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_60k') model = BertModel.from_pretrained("google/multiberts-seed_0-step_60k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Citation info ```bibtex @article{sellam2021multiberts, title={The MultiBERTs: BERT Reproductions for Robustness Analysis}, author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, journal={arXiv preprint arXiv:2106.16163}, year={2021} } ```
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_0", "multiberts-seed_0-step_60k"]}
google/multiberts-seed_0-step_60k
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "multiberts", "multiberts-seed_0", "multiberts-seed_0-step_60k", "en", "arxiv:2106.16163", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.16163", "1908.08962" ]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_60k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 60k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as the original BERT model but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through URL We describe them in our paper The MultiBERTs: BERT Reproductions for Robustness Analysis. This is model #0, captured at step 60k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of BERT-base uncased, for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to BERT-base uncased. Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for Turc et al., 2019. This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our technical report for more details. ### How to use Using code from BERT-base uncased, here is an example based on Tensorflow: PyTorch version: info
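As above, the code fences did not survive the plain-text conversion of this field; the TensorFlow variant from the full card for this checkpoint looks like:

```
from transformers import BertTokenizer, TFBertModel

# Load the seed-0, step-60k intermediate checkpoint with the TensorFlow classes.
tokenizer = BertTokenizer.from_pretrained("google/multiberts-seed_0-step_60k")
model = TFBertModel.from_pretrained("google/multiberts-seed_0-step_60k")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors="tf")
output = model(encoded_input)
```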
[ "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 60k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 60k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_60k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n", "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 60k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 60k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
null
transformers
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 700k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #0, captured at step 700k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details. ### How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_700k') model = TFBertModel.from_pretrained("google/multiberts-seed_0-step_700k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_700k') model = BertModel.from_pretrained("google/multiberts-seed_0-step_700k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Citation info ```bibtex @article{sellam2021multiberts, title={The MultiBERTs: BERT Reproductions for Robustness Analysis}, author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, journal={arXiv preprint arXiv:2106.16163}, year={2021} } ```
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_0", "multiberts-seed_0-step_700k"]}
google/multiberts-seed_0-step_700k
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "multiberts", "multiberts-seed_0", "multiberts-seed_0-step_700k", "en", "arxiv:2106.16163", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.16163", "1908.08962" ]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_700k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 700k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as the original BERT model but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through URL We describe them in our paper The MultiBERTs: BERT Reproductions for Robustness Analysis. This is model #0, captured at step 700k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of BERT-base uncased, for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to BERT-base uncased. Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for Turc et al., 2019. This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our technical report for more details. ### How to use Using code from BERT-base uncased, here is an example based on Tensorflow: PyTorch version: info
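The "How to use" code was likewise stripped here; a minimal PyTorch sketch for this checkpoint, mirroring the full card in this record, is:

```
from transformers import BertTokenizer, BertModel

# Seed 0, step 700k intermediate checkpoint.
tokenizer = BertTokenizer.from_pretrained("google/multiberts-seed_0-step_700k")
model = BertModel.from_pretrained("google/multiberts-seed_0-step_700k")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors="pt")
output = model(**encoded_input)
```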
[ "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 700k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 700k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_700k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n", "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 700k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 700k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
null
transformers
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 800k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #0, captured at step 800k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details. ### How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_800k') model = TFBertModel.from_pretrained("google/multiberts-seed_0-step_800k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_800k') model = BertModel.from_pretrained("google/multiberts-seed_0-step_800k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Citation info ```bibtex @article{sellam2021multiberts, title={The MultiBERTs: BERT Reproductions for Robustness Analysis}, author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, journal={arXiv preprint arXiv:2106.16163}, year={2021} } ```
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_0", "multiberts-seed_0-step_800k"]}
google/multiberts-seed_0-step_800k
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "multiberts", "multiberts-seed_0", "multiberts-seed_0-step_800k", "en", "arxiv:2106.16163", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.16163", "1908.08962" ]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_800k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 800k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as the original BERT model but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through URL We describe them in our paper The MultiBERTs: BERT Reproductions for Robustness Analysis. This is model #0, captured at step 800k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of BERT-base uncased, for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to BERT-base uncased. Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for Turc et al., 2019. This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our technical report for more details. ### How to use Using code from BERT-base uncased, here is an example based on Tensorflow: PyTorch version: info
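Again, the example code is missing from this plain-text field; mirroring the full card, the PyTorch usage for this checkpoint is:

```
from transformers import BertTokenizer, BertModel  # or TFBertModel for the TensorFlow version

# Seed 0, step 800k intermediate checkpoint.
tokenizer = BertTokenizer.from_pretrained("google/multiberts-seed_0-step_800k")
model = BertModel.from_pretrained("google/multiberts-seed_0-step_800k")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors="pt")
output = model(**encoded_input)
```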
[ "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 800k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 800k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_800k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n", "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 800k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 800k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
null
transformers
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 80k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #0, captured at step 80k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details. ### How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_80k') model = TFBertModel.from_pretrained("google/multiberts-seed_0-step_80k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_80k') model = BertModel.from_pretrained("google/multiberts-seed_0-step_80k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Citation info ```bibtex @article{sellam2021multiberts, title={The MultiBERTs: BERT Reproductions for Robustness Analysis}, author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, journal={arXiv preprint arXiv:2106.16163}, year={2021} } ```
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_0", "multiberts-seed_0-step_80k"]}
google/multiberts-seed_0-step_80k
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "multiberts", "multiberts-seed_0", "multiberts-seed_0-step_80k", "en", "arxiv:2106.16163", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.16163", "1908.08962" ]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_80k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 80k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as the original BERT model but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through URL We describe them in our paper The MultiBERTs: BERT Reproductions for Robustness Analysis. This is model #0, captured at step 80k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of BERT-base uncased, for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to BERT-base uncased. Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for Turc et al., 2019. This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our technical report for more details. ### How to use Using code from BERT-base uncased, here is an example based on Tensorflow: PyTorch version: info
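The stripped example for this checkpoint, reconstructed from the full card in this record (PyTorch version):

```
from transformers import BertTokenizer, BertModel

# Seed 0, step 80k intermediate checkpoint.
tokenizer = BertTokenizer.from_pretrained("google/multiberts-seed_0-step_80k")
model = BertModel.from_pretrained("google/multiberts-seed_0-step_80k")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors="pt")
output = model(**encoded_input)
```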
[ "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 80k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 80k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_80k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n", "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 80k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 80k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
null
transformers
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 900k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #0, captured at step 900k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details. ### How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_900k') model = TFBertModel.from_pretrained("google/multiberts-seed_0-step_900k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_900k') model = BertModel.from_pretrained("google/multiberts-seed_0-step_900k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Citation info ```bibtex @article{sellam2021multiberts, title={The MultiBERTs: BERT Reproductions for Robustness Analysis}, author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, journal={arXiv preprint arXiv:2106.16163}, year={2021} } ```
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_0", "multiberts-seed_0-step_900k"]}
google/multiberts-seed_0-step_900k
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "multiberts", "multiberts-seed_0", "multiberts-seed_0-step_900k", "en", "arxiv:2106.16163", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.16163", "1908.08962" ]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_900k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 900k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as the original BERT model but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through URL We describe them in our paper The MultiBERTs: BERT Reproductions for Robustness Analysis. This is model #0, captured at step 900k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of BERT-base uncased, for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to BERT-base uncased. Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for Turc et al., 2019. This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our technical report for more details. ### How to use Using code from BERT-base uncased, here is an example based on Tensorflow: PyTorch version: info
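The processed card above keeps the "How to use" prose but drops the code it refers to. Below is a minimal PyTorch sketch of that usage for this checkpoint; the input sentence is arbitrary and the printed shapes are only for illustration.

```
# Minimal PyTorch sketch of the usage referenced in the card above.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("google/multiberts-seed_0-step_900k")
model = BertModel.from_pretrained("google/multiberts-seed_0-step_900k")

encoded = tokenizer("Replace me by any text you'd like.", return_tensors="pt")
with torch.no_grad():
    output = model(**encoded)

# Token-level embeddings: (batch, sequence_length, hidden_size=768)
print(output.last_hidden_state.shape)
# Pooled [CLS] representation used by the NSP head during pretraining: (batch, 768)
print(output.pooler_output.shape)
```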
[ "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 900k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 900k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #multiberts-seed_0-step_900k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n", "# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 900k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0, captured at step 900k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
null
transformers
# MultiBERTs - Seed 0 MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #0. ## Model Description This model is a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details. ### How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0') model = TFBertModel.from_pretrained("google/multiberts-seed_0") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0') model = BertModel.from_pretrained("google/multiberts-seed_0") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Citation info ```bibtex @article{sellam2021multiberts, title={The MultiBERTs: BERT Reproductions for Robustness Analysis}, author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, journal={arXiv preprint arXiv:2106.16163}, year={2021} } ```
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_0"]}
google/multiberts-seed_0
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "multiberts", "multiberts-seed_0", "en", "arxiv:2106.16163", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.16163", "1908.08962" ]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
# MultiBERTs - Seed 0 MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as the original BERT model but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through URL We describe them in our paper The MultiBERTs: BERT Reproductions for Robustness Analysis. This is model #0. ## Model Description This model is a reproduction of BERT-base uncased, for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure are similar to BERT-base uncased. Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for Turc et al., 2019. This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our technical report for more details. ### How to use Using code from BERT-base uncased, here is an example based on Tensorflow: PyTorch version: info
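Since the card describes pretraining with an MLM objective, one quick way to probe the fully trained seed-0 model is the generic fill-mask pipeline. This is a sketch, not part of the original card: it assumes the hosted checkpoint ships the masked-language-modelling head weights (which the "pretraining" tag suggests), and the probe sentence is arbitrary.

```
# Sketch: probe the MLM head of the seed-0 model with a masked sentence.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="google/multiberts-seed_0")
for prediction in unmasker("The capital of France is [MASK]."):
    # Each prediction carries the filled token and its softmax score.
    print(prediction["token_str"], round(prediction["score"], 3))
```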
[ "# MultiBERTs - Seed 0\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0.", "## Model Description\n\nThis model is a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_0 #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n", "# MultiBERTs - Seed 0\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #0.", "## Model Description\n\nThis model is a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
null
transformers
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 0k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #1, captured at step 0k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details. ### How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_0k') model = TFBertModel.from_pretrained("google/multiberts-seed_1-step_0k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_0k') model = BertModel.from_pretrained("google/multiberts-seed_1-step_0k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Citation info ```bibtex @article{sellam2021multiberts, title={The MultiBERTs: BERT Reproductions for Robustness Analysis}, author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, journal={arXiv preprint arXiv:2106.16163}, year={2021} } ```
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_1", "multiberts-seed_1-step_0k"]}
google/multiberts-seed_1-step_0k
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "multiberts", "multiberts-seed_1", "multiberts-seed_1-step_0k", "en", "arxiv:2106.16163", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.16163", "1908.08962" ]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_1 #multiberts-seed_1-step_0k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 0k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as the original BERT model but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through URL We describe them in our paper The MultiBERTs: BERT Reproductions for Robustness Analysis. This is model #1, captured at step 0k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of BERT-base uncased, for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to BERT-base uncased. Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for Turc et al., 2019. This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our technical report for more details. ### How to use Using code from BERT-base uncased, here is an example based on Tensorflow: PyTorch version: info
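The stripped card above mentions a TensorFlow example that was removed during processing. A minimal TensorFlow sketch for this step-0k (random-initialization) checkpoint, with an arbitrary input sentence:

```
# TensorFlow counterpart of the example referenced in the card above.
from transformers import BertTokenizer, TFBertModel

tokenizer = BertTokenizer.from_pretrained("google/multiberts-seed_1-step_0k")
model = TFBertModel.from_pretrained("google/multiberts-seed_1-step_0k")

encoded_input = tokenizer("Replace me by any text you'd like.", return_tensors="tf")
output = model(encoded_input)
print(output.last_hidden_state.shape)  # (1, sequence_length, 768)
```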
[ "# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 0k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #1, captured at step 0k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_1 #multiberts-seed_1-step_0k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n", "# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 0k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #1, captured at step 0k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
null
transformers
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 1000k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #1, captured at step 1000k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details. ### How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_1000k') model = TFBertModel.from_pretrained("google/multiberts-seed_1-step_1000k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_1000k') model = BertModel.from_pretrained("google/multiberts-seed_1-step_1000k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Citation info ```bibtex @article{sellam2021multiberts, title={The MultiBERTs: BERT Reproductions for Robustness Analysis}, author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, journal={arXiv preprint arXiv:2106.16163}, year={2021} } ```
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_1", "multiberts-seed_1-step_1000k"]}
google/multiberts-seed_1-step_1000k
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "multiberts", "multiberts-seed_1", "multiberts-seed_1-step_1000k", "en", "arxiv:2106.16163", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.16163", "1908.08962" ]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_1 #multiberts-seed_1-step_1000k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 1000k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as the original BERT model but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through URL We describe them in our paper The MultiBERTs: BERT Reproductions for Robustness Analysis. This is model #1, captured at step 1000k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of BERT-base uncased, for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to BERT-base uncased. Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for Turc et al., 2019. This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our technical report for more details. ### How to use Using code from BERT-base uncased, here is an example based on Tensorflow: PyTorch version: info
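Because the collection exposes successive intermediate checkpoints, one natural use is tracking how a fixed sentence's representation drifts during pre-training. The sketch below loops over the later seed-1 checkpoints that appear in this dump; the probe sentence and the choice of cosine similarity over the [CLS] vector are illustrative assumptions, not part of the original cards.

```
# Sketch: measure [CLS]-representation drift across successive seed-1 checkpoints.
import torch
from transformers import BertTokenizer, BertModel

checkpoints = [
    "google/multiberts-seed_1-step_1000k",
    "google/multiberts-seed_1-step_1100k",
    "google/multiberts-seed_1-step_1200k",
]
tokenizer = BertTokenizer.from_pretrained(checkpoints[0])
encoded = tokenizer("Replace me by any text you'd like.", return_tensors="pt")

previous = None
for name in checkpoints:
    model = BertModel.from_pretrained(name)
    model.eval()
    with torch.no_grad():
        cls_vec = model(**encoded).last_hidden_state[:, 0, :]
    if previous is not None:
        sim = torch.nn.functional.cosine_similarity(previous, cls_vec).item()
        print(f"cosine similarity with previous checkpoint at {name}: {sim:.4f}")
    previous = cls_vec
```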
[ "# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 1000k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #1, captured at step 1000k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_1 #multiberts-seed_1-step_1000k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n", "# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 1000k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #1, captured at step 1000k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
null
transformers
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 100k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #1, captured at step 100k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details. ### How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_100k') model = TFBertModel.from_pretrained("google/multiberts-seed_1-step_100k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_100k') model = BertModel.from_pretrained("google/multiberts-seed_1-step_100k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Citation info ```bibtex @article{sellam2021multiberts, title={The MultiBERTs: BERT Reproductions for Robustness Analysis}, author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, journal={arXiv preprint arXiv:2106.16163}, year={2021} } ```
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_1", "multiberts-seed_1-step_100k"]}
google/multiberts-seed_1-step_100k
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "multiberts", "multiberts-seed_1", "multiberts-seed_1-step_100k", "en", "arxiv:2106.16163", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.16163", "1908.08962" ]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_1 #multiberts-seed_1-step_100k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 100k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as the original BERT model but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through URL We describe them in our paper The MultiBERTs: BERT Reproductions for Robustness Analysis. This is model #1, captured at step 100k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of BERT-base uncased, for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to BERT-base uncased. Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for Turc et al., 2019. This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our technical report for more details. ### How to use Using code from BERT-base uncased, here is an example based on Tensorflow: PyTorch version: info
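For repeated experiments across many intermediate checkpoints it can help to cache each one locally rather than re-downloading it from the Hub. A hedged sketch using the standard save_pretrained / from_pretrained round trip; the local directory name is an arbitrary example, not something defined by the card.

```
# Sketch: cache this checkpoint on disk for offline reuse.
from transformers import AutoModel, AutoTokenizer

name = "google/multiberts-seed_1-step_100k"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

local_dir = "./multiberts-seed_1-step_100k"  # arbitrary example path
tokenizer.save_pretrained(local_dir)
model.save_pretrained(local_dir)

# Later, load from disk instead of the Hub:
model = AutoModel.from_pretrained(local_dir)
```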
[ "# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 100k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #1, captured at step 100k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_1 #multiberts-seed_1-step_100k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n", "# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 100k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #1, captured at step 100k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
null
transformers
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 1100k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #1, captured at step 1100k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details. ### How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_1100k') model = TFBertModel.from_pretrained("google/multiberts-seed_1-step_1100k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_1100k') model = BertModel.from_pretrained("google/multiberts-seed_1-step_1100k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Citation info ```bibtex @article{sellam2021multiberts, title={The MultiBERTs: BERT Reproductions for Robustness Analysis}, author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, journal={arXiv preprint arXiv:2106.16163}, year={2021} } ```
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_1", "multiberts-seed_1-step_1100k"]}
google/multiberts-seed_1-step_1100k
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "multiberts", "multiberts-seed_1", "multiberts-seed_1-step_1100k", "en", "arxiv:2106.16163", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.16163", "1908.08962" ]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_1 #multiberts-seed_1-step_1100k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 1100k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as the original BERT model but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through URL We describe them in our paper The MultiBERTs: BERT Reproductions for Robustness Analysis. This is model #1, captured at step 1100k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of BERT-base uncased, for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to BERT-base uncased. Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for Turc et al., 2019. This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our technical report for more details. ### How to use Using code from BERT-base uncased, here is an example based on Tensorflow: PyTorch version: info
[ "# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 1100k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #1, captured at step 1100k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_1 #multiberts-seed_1-step_1100k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n", "# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 1100k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #1, captured at step 1100k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
null
transformers
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 1200k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #1, captured at step 1200k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details. ### How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_1200k') model = TFBertModel.from_pretrained("google/multiberts-seed_1-step_1200k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_1200k') model = BertModel.from_pretrained("google/multiberts-seed_1-step_1200k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Citation info ```bibtex @article{sellam2021multiberts, title={The MultiBERTs: BERT Reproductions for Robustness Analysis}, author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, journal={arXiv preprint arXiv:2106.16163}, year={2021} } ```
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_1", "multiberts-seed_1-step_1200k"]}
google/multiberts-seed_1-step_1200k
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "multiberts", "multiberts-seed_1", "multiberts-seed_1-step_1200k", "en", "arxiv:2106.16163", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.16163", "1908.08962" ]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_1 #multiberts-seed_1-step_1200k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 1200k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as the original BERT model but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through URL We describe them in our paper The MultiBERTs: BERT Reproductions for Robustness Analysis. This is model #1, captured at step 1200k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of BERT-base uncased, for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to BERT-base uncased. Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for Turc et al., 2019. This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our technical report for more details. ### How to use Using code from BERT-base uncased, here is an example based on Tensorflow: PyTorch version: info
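A minimal sketch of the usage snippet referred to above, based on the checkpoint id `google/multiberts-seed_1-step_1200k` listed for this model. PyTorch is shown, and the TensorFlow variant swaps in `TFBertModel` with `return_tensors='tf'`:

```
from transformers import BertTokenizer, BertModel  # use TFBertModel for the TensorFlow variant

# Load the seed-1, step-1200k MultiBERTs checkpoint and its tokenizer.
tokenizer = BertTokenizer.from_pretrained("google/multiberts-seed_1-step_1200k")
model = BertModel.from_pretrained("google/multiberts-seed_1-step_1200k")

# Encode any input text and run a forward pass to obtain contextual embeddings.
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors="pt")  # return_tensors="tf" for TensorFlow
output = model(**encoded_input)
```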
[ "# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 1200k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #1, captured at step 1200k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_1 #multiberts-seed_1-step_1200k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n", "# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 1200k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #1, captured at step 1200k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
null
transformers
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 120k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #1, captured at step 120k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details. ### How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_120k') model = TFBertModel.from_pretrained("google/multiberts-seed_1-step_120k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_120k') model = BertModel.from_pretrained("google/multiberts-seed_1-step_120k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Citation info ```bibtex @article{sellam2021multiberts, title={The MultiBERTs: BERT Reproductions for Robustness Analysis}, author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, journal={arXiv preprint arXiv:2106.16163}, year={2021} } ```
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_1", "multiberts-seed_1-step_120k"]}
google/multiberts-seed_1-step_120k
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "multiberts", "multiberts-seed_1", "multiberts-seed_1-step_120k", "en", "arxiv:2106.16163", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.16163", "1908.08962" ]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_1 #multiberts-seed_1-step_120k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 120k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as the original BERT model but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through URL We describe them in our paper The MultiBERTs: BERT Reproductions for Robustness Analysis. This is model #1, captured at step 120k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of BERT-base uncased, for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to BERT-base uncased. Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for Turc et al., 2019. This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our technical report for more details. ### How to use Using code from BERT-base uncased, here is an example based on Tensorflow: PyTorch version: info
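A minimal sketch of the usage snippet referred to above, based on the checkpoint id `google/multiberts-seed_1-step_120k` listed for this model. PyTorch is shown, and the TensorFlow variant swaps in `TFBertModel` with `return_tensors='tf'`:

```
from transformers import BertTokenizer, BertModel  # use TFBertModel for the TensorFlow variant

# Load the seed-1, step-120k MultiBERTs checkpoint and its tokenizer.
tokenizer = BertTokenizer.from_pretrained("google/multiberts-seed_1-step_120k")
model = BertModel.from_pretrained("google/multiberts-seed_1-step_120k")

# Encode any input text and run a forward pass to obtain contextual embeddings.
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors="pt")  # return_tensors="tf" for TensorFlow
output = model(**encoded_input)
```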
[ "# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 120k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #1, captured at step 120k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_1 #multiberts-seed_1-step_120k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n", "# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 120k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #1, captured at step 120k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
null
transformers
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 1300k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #1, captured at step 1300k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details. ### How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_1300k') model = TFBertModel.from_pretrained("google/multiberts-seed_1-step_1300k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_1300k') model = BertModel.from_pretrained("google/multiberts-seed_1-step_1300k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Citation info ```bibtex @article{sellam2021multiberts, title={The MultiBERTs: BERT Reproductions for Robustness Analysis}, author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, journal={arXiv preprint arXiv:2106.16163}, year={2021} } ```
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_1", "multiberts-seed_1-step_1300k"]}
google/multiberts-seed_1-step_1300k
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "multiberts", "multiberts-seed_1", "multiberts-seed_1-step_1300k", "en", "arxiv:2106.16163", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.16163", "1908.08962" ]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_1 #multiberts-seed_1-step_1300k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 1300k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as the original BERT model but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through URL We describe them in our paper The MultiBERTs: BERT Reproductions for Robustness Analysis. This is model #1, captured at step 1300k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of BERT-base uncased, for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to BERT-base uncased. Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for Turc et al., 2019. This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our technical report for more details. ### How to use Using code from BERT-base uncased, here is an example based on Tensorflow: PyTorch version: info
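A minimal sketch of the usage snippet referred to above, based on the checkpoint id `google/multiberts-seed_1-step_1300k` listed for this model. PyTorch is shown, and the TensorFlow variant swaps in `TFBertModel` with `return_tensors='tf'`:

```
from transformers import BertTokenizer, BertModel  # use TFBertModel for the TensorFlow variant

# Load the seed-1, step-1300k MultiBERTs checkpoint and its tokenizer.
tokenizer = BertTokenizer.from_pretrained("google/multiberts-seed_1-step_1300k")
model = BertModel.from_pretrained("google/multiberts-seed_1-step_1300k")

# Encode any input text and run a forward pass to obtain contextual embeddings.
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors="pt")  # return_tensors="tf" for TensorFlow
output = model(**encoded_input)
```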
[ "# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 1300k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #1, captured at step 1300k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_1 #multiberts-seed_1-step_1300k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n", "# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 1300k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #1, captured at step 1300k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
null
transformers
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 1400k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #1, captured at step 1400k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details. ### How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_1400k') model = TFBertModel.from_pretrained("google/multiberts-seed_1-step_1400k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_1400k') model = BertModel.from_pretrained("google/multiberts-seed_1-step_1400k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Citation info ```bibtex @article{sellam2021multiberts, title={The MultiBERTs: BERT Reproductions for Robustness Analysis}, author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, journal={arXiv preprint arXiv:2106.16163}, year={2021} } ```
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_1", "multiberts-seed_1-step_1400k"]}
google/multiberts-seed_1-step_1400k
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "multiberts", "multiberts-seed_1", "multiberts-seed_1-step_1400k", "en", "arxiv:2106.16163", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.16163", "1908.08962" ]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_1 #multiberts-seed_1-step_1400k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 1400k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as the original BERT model but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through URL We describe them in our paper The MultiBERTs: BERT Reproductions for Robustness Analysis. This is model #1, captured at step 1400k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of BERT-base uncased, for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to BERT-base uncased. Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for Turc et al., 2019. This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our technical report for more details. ### How to use Using code from BERT-base uncased, here is an example based on Tensorflow: PyTorch version: info
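A minimal sketch of the usage snippet referred to above, based on the checkpoint id `google/multiberts-seed_1-step_1400k` listed for this model. PyTorch is shown, and the TensorFlow variant swaps in `TFBertModel` with `return_tensors='tf'`:

```
from transformers import BertTokenizer, BertModel  # use TFBertModel for the TensorFlow variant

# Load the seed-1, step-1400k MultiBERTs checkpoint and its tokenizer.
tokenizer = BertTokenizer.from_pretrained("google/multiberts-seed_1-step_1400k")
model = BertModel.from_pretrained("google/multiberts-seed_1-step_1400k")

# Encode any input text and run a forward pass to obtain contextual embeddings.
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors="pt")  # return_tensors="tf" for TensorFlow
output = model(**encoded_input)
```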
[ "# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 1400k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #1, captured at step 1400k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_1 #multiberts-seed_1-step_1400k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n", "# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 1400k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #1, captured at step 1400k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
null
transformers
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 140k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #1, captured at step 140k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details. ### How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_140k') model = TFBertModel.from_pretrained("google/multiberts-seed_1-step_140k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_140k') model = BertModel.from_pretrained("google/multiberts-seed_1-step_140k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Citation info ```bibtex @article{sellam2021multiberts, title={The MultiBERTs: BERT Reproductions for Robustness Analysis}, author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, journal={arXiv preprint arXiv:2106.16163}, year={2021} } ```
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_1", "multiberts-seed_1-step_140k"]}
google/multiberts-seed_1-step_140k
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "multiberts", "multiberts-seed_1", "multiberts-seed_1-step_140k", "en", "arxiv:2106.16163", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.16163", "1908.08962" ]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_1 #multiberts-seed_1-step_140k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 140k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as the original BERT model but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through URL We describe them in our paper The MultiBERTs: BERT Reproductions for Robustness Analysis. This is model #1, captured at step 140k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of BERT-base uncased, for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to BERT-base uncased. Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for Turc et al., 2019. This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our technical report for more details. ### How to use Using code from BERT-base uncased, here is an example based on Tensorflow: PyTorch version: info
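A minimal sketch of the usage snippet referred to above, based on the checkpoint id `google/multiberts-seed_1-step_140k` listed for this model. PyTorch is shown, and the TensorFlow variant swaps in `TFBertModel` with `return_tensors='tf'`:

```
from transformers import BertTokenizer, BertModel  # use TFBertModel for the TensorFlow variant

# Load the seed-1, step-140k MultiBERTs checkpoint and its tokenizer.
tokenizer = BertTokenizer.from_pretrained("google/multiberts-seed_1-step_140k")
model = BertModel.from_pretrained("google/multiberts-seed_1-step_140k")

# Encode any input text and run a forward pass to obtain contextual embeddings.
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors="pt")  # return_tensors="tf" for TensorFlow
output = model(**encoded_input)
```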
[ "# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 140k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #1, captured at step 140k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_1 #multiberts-seed_1-step_140k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n", "# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 140k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #1, captured at step 140k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
null
transformers
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 1500k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #1, captured at step 1500k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details. ### How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_1500k') model = TFBertModel.from_pretrained("google/multiberts-seed_1-step_1500k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_1500k') model = BertModel.from_pretrained("google/multiberts-seed_1-step_1500k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Citation info ```bibtex @article{sellam2021multiberts, title={The MultiBERTs: BERT Reproductions for Robustness Analysis}, author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, journal={arXiv preprint arXiv:2106.16163}, year={2021} } ```
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_1", "multiberts-seed_1-step_1500k"]}
google/multiberts-seed_1-step_1500k
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "multiberts", "multiberts-seed_1", "multiberts-seed_1-step_1500k", "en", "arxiv:2106.16163", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.16163", "1908.08962" ]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_1 #multiberts-seed_1-step_1500k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 1500k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as the original BERT model but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through URL We describe them in our paper The MultiBERTs: BERT Reproductions for Robustness Analysis. This is model #1, captured at step 1500k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of BERT-base uncased, for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to BERT-base uncased. Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for Turc et al., 2019. This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our technical report for more details. ### How to use Using code from BERT-base uncased, here is an example based on Tensorflow: PyTorch version: info
[ "# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 1500k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #1, captured at step 1500k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_1 #multiberts-seed_1-step_1500k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n", "# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 1500k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #1, captured at step 1500k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
null
transformers
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 1600k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #1, captured at step 1600k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details. ### How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_1600k') model = TFBertModel.from_pretrained("google/multiberts-seed_1-step_1600k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_1600k') model = BertModel.from_pretrained("google/multiberts-seed_1-step_1600k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Citation info ```bibtex @article{sellam2021multiberts, title={The MultiBERTs: BERT Reproductions for Robustness Analysis}, author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, journal={arXiv preprint arXiv:2106.16163}, year={2021} } ```
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_1", "multiberts-seed_1-step_1600k"]}
google/multiberts-seed_1-step_1600k
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "multiberts", "multiberts-seed_1", "multiberts-seed_1-step_1600k", "en", "arxiv:2106.16163", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.16163", "1908.08962" ]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_1 #multiberts-seed_1-step_1600k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 1600k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as the original BERT model but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through URL We describe them in our paper The MultiBERTs: BERT Reproductions for Robustness Analysis. This is model #1, captured at step 1600k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of BERT-base uncased, for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to BERT-base uncased. Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for Turc et al., 2019. This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our technical report for more details. ### How to use Using code from BERT-base uncased, here is an example based on Tensorflow: PyTorch version: info
[ "# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 1600k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #1, captured at step 1600k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_1 #multiberts-seed_1-step_1600k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n", "# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 1600k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #1, captured at step 1600k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
null
transformers
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 160k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #1, captured at step 160k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details. ### How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_160k') model = TFBertModel.from_pretrained("google/multiberts-seed_1-step_160k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_160k') model = BertModel.from_pretrained("google/multiberts-seed_1-step_160k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Citation info ```bibtex @article{sellam2021multiberts, title={The MultiBERTs: BERT Reproductions for Robustness Analysis}, author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, journal={arXiv preprint arXiv:2106.16163}, year={2021} } ```
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_1", "multiberts-seed_1-step_160k"]}
google/multiberts-seed_1-step_160k
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "multiberts", "multiberts-seed_1", "multiberts-seed_1-step_160k", "en", "arxiv:2106.16163", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.16163", "1908.08962" ]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_1 #multiberts-seed_1-step_160k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 160k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as the original BERT model but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through URL We describe them in our paper The MultiBERTs: BERT Reproductions for Robustness Analysis. This is model #1, captured at step 160k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of BERT-base uncased, for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to BERT-base uncased. Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for Turc et al., 2019. This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our technical report for more details. ### How to use Using code from BERT-base uncased, here is an example based on Tensorflow: PyTorch version: info
[ "# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 160k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #1, captured at step 160k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_1 #multiberts-seed_1-step_160k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n", "# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 160k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #1, captured at step 160k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
null
transformers
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 1700k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #1, captured at step 1700k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details. ### How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_1700k') model = TFBertModel.from_pretrained("google/multiberts-seed_1-step_1700k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_1700k') model = BertModel.from_pretrained("google/multiberts-seed_1-step_1700k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Citation info ```bibtex @article{sellam2021multiberts, title={The MultiBERTs: BERT Reproductions for Robustness Analysis}, author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, journal={arXiv preprint arXiv:2106.16163}, year={2021} } ```
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_1", "multiberts-seed_1-step_1700k"]}
google/multiberts-seed_1-step_1700k
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "multiberts", "multiberts-seed_1", "multiberts-seed_1-step_1700k", "en", "arxiv:2106.16163", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.16163", "1908.08962" ]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_1 #multiberts-seed_1-step_1700k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 1700k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as the original BERT model but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through URL We describe them in our paper The MultiBERTs: BERT Reproductions for Robustness Analysis. This is model #1, captured at step 1700k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of BERT-base uncased, for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to BERT-base uncased. Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for Turc et al., 2019. This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our technical report for more details. ### How to use Using code from BERT-base uncased, here is an example based on Tensorflow: PyTorch version: info
[ "# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 1700k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #1, captured at step 1700k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_1 #multiberts-seed_1-step_1700k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n", "# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 1700k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #1, captured at step 1700k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
null
transformers
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 1800k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #1, captured at step 1800k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details. ### How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_1800k') model = TFBertModel.from_pretrained("google/multiberts-seed_1-step_1800k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_1800k') model = BertModel.from_pretrained("google/multiberts-seed_1-step_1800k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Citation info ```bibtex @article{sellam2021multiberts, title={The MultiBERTs: BERT Reproductions for Robustness Analysis}, author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, journal={arXiv preprint arXiv:2106.16163}, year={2021} } ```
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_1", "multiberts-seed_1-step_1800k"]}
google/multiberts-seed_1-step_1800k
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "multiberts", "multiberts-seed_1", "multiberts-seed_1-step_1800k", "en", "arxiv:2106.16163", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.16163", "1908.08962" ]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_1 #multiberts-seed_1-step_1800k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 1800k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as the original BERT model but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through URL We describe them in our paper The MultiBERTs: BERT Reproductions for Robustness Analysis. This is model #1, captured at step 1800k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of BERT-base uncased, for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to BERT-base uncased. Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for Turc et al., 2019. This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our technical report for more details. ### How to use Using code from BERT-base uncased, here is an example based on Tensorflow: PyTorch version: info
[ "# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 1800k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #1, captured at step 1800k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_1 #multiberts-seed_1-step_1800k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n", "# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 1800k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #1, captured at step 1800k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
null
transformers
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 180k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #1, captured at step 180k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details. ### How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_180k') model = TFBertModel.from_pretrained("google/multiberts-seed_1-step_180k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_180k') model = BertModel.from_pretrained("google/multiberts-seed_1-step_180k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Citation info ```bibtex @article{sellam2021multiberts, title={The MultiBERTs: BERT Reproductions for Robustness Analysis}, author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, journal={arXiv preprint arXiv:2106.16163}, year={2021} } ```
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_1", "multiberts-seed_1-step_180k"]}
google/multiberts-seed_1-step_180k
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "multiberts", "multiberts-seed_1", "multiberts-seed_1-step_180k", "en", "arxiv:2106.16163", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.16163", "1908.08962" ]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_1 #multiberts-seed_1-step_180k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 180k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as the original BERT model but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through URL We describe them in our paper The MultiBERTs: BERT Reproductions for Robustness Analysis. This is model #1, captured at step 180k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of BERT-base uncased, for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to BERT-base uncased. Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for Turc et al., 2019. This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our technical report for more details. ### How to use Using code from BERT-base uncased, here is an example based on Tensorflow: PyTorch version: info
[ "# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 180k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #1, captured at step 180k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_1 #multiberts-seed_1-step_180k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n", "# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 180k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #1, captured at step 180k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
null
transformers
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 1900k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #1, captured at step 1900k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details. ### How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_1900k') model = TFBertModel.from_pretrained("google/multiberts-seed_1-step_1900k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_1900k') model = BertModel.from_pretrained("google/multiberts-seed_1-step_1900k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Citation info ```bibtex @article{sellam2021multiberts, title={The MultiBERTs: BERT Reproductions for Robustness Analysis}, author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, journal={arXiv preprint arXiv:2106.16163}, year={2021} } ```
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_1", "multiberts-seed_1-step_1900k"]}
google/multiberts-seed_1-step_1900k
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "multiberts", "multiberts-seed_1", "multiberts-seed_1-step_1900k", "en", "arxiv:2106.16163", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.16163", "1908.08962" ]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_1 #multiberts-seed_1-step_1900k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 1900k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as the original BERT model but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through URL We describe them in our paper The MultiBERTs: BERT Reproductions for Robustness Analysis. This is model #1, captured at step 1900k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of BERT-base uncased, for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to BERT-base uncased. Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for Turc et al., 2019. This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our technical report for more details. ### How to use Using code from BERT-base uncased, here is an example based on Tensorflow: PyTorch version: info
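The TensorFlow example referenced above ("based on Tensorflow"), spelled out with this record's checkpoint id `google/multiberts-seed_1-step_1900k`, as given in the raw card:

```
from transformers import BertTokenizer, TFBertModel

# Checkpoint id taken from this record.
tokenizer = BertTokenizer.from_pretrained("google/multiberts-seed_1-step_1900k")
model = TFBertModel.from_pretrained("google/multiberts-seed_1-step_1900k")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors="tf")  # TensorFlow tensors
output = model(encoded_input)
```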
[ "# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 1900k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #1, captured at step 1900k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_1 #multiberts-seed_1-step_1900k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n", "# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 1900k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #1, captured at step 1900k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
null
transformers
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 2000k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #1, captured at step 2000k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details. ### How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_2000k') model = TFBertModel.from_pretrained("google/multiberts-seed_1-step_2000k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_2000k') model = BertModel.from_pretrained("google/multiberts-seed_1-step_2000k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Citation info ```bibtex @article{sellam2021multiberts, title={The MultiBERTs: BERT Reproductions for Robustness Analysis}, author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, journal={arXiv preprint arXiv:2106.16163}, year={2021} } ```
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_1", "multiberts-seed_1-step_2000k"]}
google/multiberts-seed_1-step_2000k
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "multiberts", "multiberts-seed_1", "multiberts-seed_1-step_2000k", "en", "arxiv:2106.16163", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.16163", "1908.08962" ]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_1 #multiberts-seed_1-step_2000k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 2000k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as the original BERT model but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through URL We describe them in our paper The MultiBERTs: BERT Reproductions for Robustness Analysis. This is model #1, captured at step 2000k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of BERT-base uncased, for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to BERT-base uncased. Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for Turc et al., 2019. This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our technical report for more details. ### How to use Using code from BERT-base uncased, here is an example based on Tensorflow: PyTorch version: info
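For the final (step 2000k) checkpoint, the PyTorch variant of the snippet from this record's raw card, using its id `google/multiberts-seed_1-step_2000k`:

```
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("google/multiberts-seed_1-step_2000k")
model = BertModel.from_pretrained("google/multiberts-seed_1-step_2000k")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors="pt")  # PyTorch tensors
output = model(**encoded_input)
```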
[ "# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 2000k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #1, captured at step 2000k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_1 #multiberts-seed_1-step_2000k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n", "# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 2000k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #1, captured at step 2000k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
null
transformers
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 200k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #1, captured at step 200k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details. ### How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_200k') model = TFBertModel.from_pretrained("google/multiberts-seed_1-step_200k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_200k') model = BertModel.from_pretrained("google/multiberts-seed_1-step_200k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Citation info ```bibtex @article{sellam2021multiberts, title={The MultiBERTs: BERT Reproductions for Robustness Analysis}, author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, journal={arXiv preprint arXiv:2106.16163}, year={2021} } ```
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_1", "multiberts-seed_1-step_200k"]}
google/multiberts-seed_1-step_200k
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "multiberts", "multiberts-seed_1", "multiberts-seed_1-step_200k", "en", "arxiv:2106.16163", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.16163", "1908.08962" ]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_1 #multiberts-seed_1-step_200k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 200k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as the original BERT model but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through URL We describe them in our paper The MultiBERTs: BERT Reproductions for Robustness Analysis. This is model #1, captured at step 200k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of BERT-base uncased, for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to BERT-base uncased. Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for Turc et al., 2019. This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our technical report for more details. ### How to use Using code from BERT-base uncased, here is an example based on Tensorflow: PyTorch version: info
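The TensorFlow usage referenced above, spelled out with this record's checkpoint id `google/multiberts-seed_1-step_200k`:

```
from transformers import BertTokenizer, TFBertModel

tokenizer = BertTokenizer.from_pretrained("google/multiberts-seed_1-step_200k")
model = TFBertModel.from_pretrained("google/multiberts-seed_1-step_200k")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors="tf")
output = model(encoded_input)  # TFBaseModelOutput with last_hidden_state
```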
[ "# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 200k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #1, captured at step 200k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_1 #multiberts-seed_1-step_200k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n", "# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 200k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #1, captured at step 200k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
null
transformers
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 20k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #1, captured at step 20k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details. ### How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_20k') model = TFBertModel.from_pretrained("google/multiberts-seed_1-step_20k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_20k') model = BertModel.from_pretrained("google/multiberts-seed_1-step_20k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Citation info ```bibtex @article{sellam2021multiberts, title={The MultiBERTs: BERT Reproductions for Robustness Analysis}, author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, journal={arXiv preprint arXiv:2106.16163}, year={2021} } ```
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_1", "multiberts-seed_1-step_20k"]}
google/multiberts-seed_1-step_20k
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "multiberts", "multiberts-seed_1", "multiberts-seed_1-step_20k", "en", "arxiv:2106.16163", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.16163", "1908.08962" ]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_1 #multiberts-seed_1-step_20k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 20k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as the original BERT model but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through URL We describe them in our paper The MultiBERTs: BERT Reproductions for Robustness Analysis. This is model #1, captured at step 20k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of BERT-base uncased, for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to BERT-base uncased. Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for Turc et al., 2019. This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our technical report for more details. ### How to use Using code from BERT-base uncased, here is an example based on Tensorflow: PyTorch version: info
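The PyTorch version of the usage example above, with the checkpoint id from this record, `google/multiberts-seed_1-step_20k`:

```
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("google/multiberts-seed_1-step_20k")
model = BertModel.from_pretrained("google/multiberts-seed_1-step_20k")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors="pt")
output = model(**encoded_input)
```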
[ "# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 20k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #1, captured at step 20k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_1 #multiberts-seed_1-step_20k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n", "# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 20k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #1, captured at step 20k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
null
transformers
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 300k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #1, captured at step 300k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details. ### How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_300k') model = TFBertModel.from_pretrained("google/multiberts-seed_1-step_300k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_300k') model = BertModel.from_pretrained("google/multiberts-seed_1-step_300k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Citation info ```bibtex @article{sellam2021multiberts, title={The MultiBERTs: BERT Reproductions for Robustness Analysis}, author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, journal={arXiv preprint arXiv:2106.16163}, year={2021} } ```
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_1", "multiberts-seed_1-step_300k"]}
google/multiberts-seed_1-step_300k
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "multiberts", "multiberts-seed_1", "multiberts-seed_1-step_300k", "en", "arxiv:2106.16163", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.16163", "1908.08962" ]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_1 #multiberts-seed_1-step_300k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 300k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as the original BERT model but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through URL We describe them in our paper The MultiBERTs: BERT Reproductions for Robustness Analysis. This is model #1, captured at step 300k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of BERT-base uncased, for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to BERT-base uncased. Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for Turc et al., 2019. This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our technical report for more details. ### How to use Using code from BERT-base uncased, here is an example based on Tensorflow: PyTorch version: info
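Spelled out, the TensorFlow example referenced above for checkpoint `google/multiberts-seed_1-step_300k` (the id given in this record):

```
from transformers import BertTokenizer, TFBertModel

tokenizer = BertTokenizer.from_pretrained("google/multiberts-seed_1-step_300k")
model = TFBertModel.from_pretrained("google/multiberts-seed_1-step_300k")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors="tf")
output = model(encoded_input)
```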
[ "# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 300k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #1, captured at step 300k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_1 #multiberts-seed_1-step_300k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n", "# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 300k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #1, captured at step 300k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
null
transformers
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 400k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #1, captured at step 400k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details. ### How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_400k') model = TFBertModel.from_pretrained("google/multiberts-seed_1-step_400k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_400k') model = BertModel.from_pretrained("google/multiberts-seed_1-step_400k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Citation info ```bibtex @article{sellam2021multiberts, title={The MultiBERTs: BERT Reproductions for Robustness Analysis}, author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, journal={arXiv preprint arXiv:2106.16163}, year={2021} } ```
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_1", "multiberts-seed_1-step_400k"]}
google/multiberts-seed_1-step_400k
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "multiberts", "multiberts-seed_1", "multiberts-seed_1-step_400k", "en", "arxiv:2106.16163", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.16163", "1908.08962" ]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_1 #multiberts-seed_1-step_400k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 400k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as the original BERT model but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through URL We describe them in our paper The MultiBERTs: BERT Reproductions for Robustness Analysis. This is model #1, captured at step 400k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of BERT-base uncased, for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to BERT-base uncased. Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for Turc et al., 2019. This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our technical report for more details. ### How to use Using code from BERT-base uncased, here is an example based on Tensorflow: PyTorch version: info
[ "# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 400k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #1, captured at step 400k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_1 #multiberts-seed_1-step_400k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n", "# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 400k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #1, captured at step 400k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
null
transformers
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 40k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #1, captured at step 40k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details. ### How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_40k') model = TFBertModel.from_pretrained("google/multiberts-seed_1-step_40k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_40k') model = BertModel.from_pretrained("google/multiberts-seed_1-step_40k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Citation info ```bibtex @article{sellam2021multiberts, title={The MultiBERTs: BERT Reproductions for Robustness Analysis}, author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, journal={arXiv preprint arXiv:2106.16163}, year={2021} } ```
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_1", "multiberts-seed_1-step_40k"]}
google/multiberts-seed_1-step_40k
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "multiberts", "multiberts-seed_1", "multiberts-seed_1-step_40k", "en", "arxiv:2106.16163", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.16163", "1908.08962" ]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_1 #multiberts-seed_1-step_40k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 40k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as the original BERT model but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through URL We describe them in our paper The MultiBERTs: BERT Reproductions for Robustness Analysis. This is model #1, captured at step 40k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of BERT-base uncased, for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to BERT-base uncased. Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for Turc et al., 2019. This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our technical report for more details. ### How to use Using code from BERT-base uncased, here is an example based on Tensorflow: PyTorch version: info
[ "# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 40k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #1, captured at step 40k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_1 #multiberts-seed_1-step_40k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n", "# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 40k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #1, captured at step 40k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
null
transformers
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 500k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #1, captured at step 500k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details. ### How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_500k') model = TFBertModel.from_pretrained("google/multiberts-seed_1-step_500k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_500k') model = BertModel.from_pretrained("google/multiberts-seed_1-step_500k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Citation info ```bibtex @article{sellam2021multiberts, title={The MultiBERTs: BERT Reproductions for Robustness Analysis}, author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, journal={arXiv preprint arXiv:2106.16163}, year={2021} } ```
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_1", "multiberts-seed_1-step_500k"]}
google/multiberts-seed_1-step_500k
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "multiberts", "multiberts-seed_1", "multiberts-seed_1-step_500k", "en", "arxiv:2106.16163", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.16163", "1908.08962" ]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_1 #multiberts-seed_1-step_500k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 500k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as the original BERT model but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through URL We describe them in our paper The MultiBERTs: BERT Reproductions for Robustness Analysis. This is model #1, captured at step 500k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of BERT-base uncased, for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to BERT-base uncased. Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for Turc et al., 2019. This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our technical report for more details. ### How to use Using code from BERT-base uncased, here is an example based on Tensorflow: PyTorch version: info
[ "# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 500k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #1, captured at step 500k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_1 #multiberts-seed_1-step_500k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n", "# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 500k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #1, captured at step 500k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
null
transformers
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 600k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #1, captured at step 600k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details. ### How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_600k') model = TFBertModel.from_pretrained("google/multiberts-seed_1-step_600k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_600k') model = BertModel.from_pretrained("google/multiberts-seed_1-step_600k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Citation info ```bibtex @article{sellam2021multiberts, title={The MultiBERTs: BERT Reproductions for Robustness Analysis}, author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, journal={arXiv preprint arXiv:2106.16163}, year={2021} } ```
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_1", "multiberts-seed_1-step_600k"]}
google/multiberts-seed_1-step_600k
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "multiberts", "multiberts-seed_1", "multiberts-seed_1-step_600k", "en", "arxiv:2106.16163", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.16163", "1908.08962" ]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_1 #multiberts-seed_1-step_600k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 600k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as the original BERT model but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through URL We describe them in our paper The MultiBERTs: BERT Reproductions for Robustness Analysis. This is model #1, captured at step 600k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of BERT-base uncased, for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to BERT-base uncased. Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for Turc et al., 2019. This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our technical report for more details. ### How to use Using code from BERT-base uncased, here is an example based on Tensorflow: PyTorch version: info
[ "# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 600k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #1, captured at step 600k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_1 #multiberts-seed_1-step_600k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n", "# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 600k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #1, captured at step 600k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
null
transformers
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 60k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #1, captured at step 60k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details. ### How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_60k') model = TFBertModel.from_pretrained("google/multiberts-seed_1-step_60k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_60k') model = BertModel.from_pretrained("google/multiberts-seed_1-step_60k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Citation info ```bibtex @article{sellam2021multiberts, title={The MultiBERTs: BERT Reproductions for Robustness Analysis}, author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, journal={arXiv preprint arXiv:2106.16163}, year={2021} } ```
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_1", "multiberts-seed_1-step_60k"]}
google/multiberts-seed_1-step_60k
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "multiberts", "multiberts-seed_1", "multiberts-seed_1-step_60k", "en", "arxiv:2106.16163", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.16163", "1908.08962" ]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_1 #multiberts-seed_1-step_60k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 60k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as the original BERT model but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through URL We describe them in our paper The MultiBERTs: BERT Reproductions for Robustness Analysis. This is model #1, captured at step 60k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of BERT-base uncased, for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to BERT-base uncased. Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for Turc et al., 2019. This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our technical report for more details. ### How to use Using code from BERT-base uncased, here is an example based on Tensorflow: PyTorch version: info
[ "# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 60k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #1, captured at step 60k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_1 #multiberts-seed_1-step_60k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n", "# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 60k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #1, captured at step 60k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
null
transformers
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 700k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #1, captured at step 700k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details. ### How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_700k') model = TFBertModel.from_pretrained("google/multiberts-seed_1-step_700k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_700k') model = BertModel.from_pretrained("google/multiberts-seed_1-step_700k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Citation info ```bibtex @article{sellam2021multiberts, title={The MultiBERTs: BERT Reproductions for Robustness Analysis}, author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, journal={arXiv preprint arXiv:2106.16163}, year={2021} } ```
{"language": "en", "license": "apache-2.0", "tags": ["multiberts", "multiberts-seed_1", "multiberts-seed_1-step_700k"]}
google/multiberts-seed_1-step_700k
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "multiberts", "multiberts-seed_1", "multiberts-seed_1-step_700k", "en", "arxiv:2106.16163", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.16163", "1908.08962" ]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_1 #multiberts-seed_1-step_700k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 700k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as the original BERT model but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through URL We describe them in our paper The MultiBERTs: BERT Reproductions for Robustness Analysis. This is model #1, captured at step 700k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of BERT-base uncased, for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to BERT-base uncased. Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for Turc et al., 2019. This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our technical report for more details. ### How to use Using code from BERT-base uncased, here is an example based on Tensorflow: PyTorch version: info
[ "# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 700k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #1, captured at step 700k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #multiberts #multiberts-seed_1 #multiberts-seed_1-step_700k #en #arxiv-2106.16163 #arxiv-1908.08962 #license-apache-2.0 #endpoints_compatible #region-us \n", "# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 700k\n\nMultiBERTs is a collection of checkpoints and a statistical library to support\nrobust research on BERT. We provide 25 BERT-base models trained with\nsimilar hyper-parameters as\nthe original BERT model but\nwith different random seeds, which causes variations in the initial weights and order of\ntraining instances. The aim is to distinguish findings that apply to a specific\nartifact (i.e., a particular instance of the model) from those that apply to the\nmore general procedure.\n\nWe also provide 140 intermediate checkpoints captured\nduring the course of pre-training (we saved 28 checkpoints for the first 5 runs).\n\nThe models were originally released through\nURL We describe them in our\npaper\nThe MultiBERTs: BERT Reproductions for Robustness Analysis.\n\nThis is model #1, captured at step 700k (max: 2000k, i.e., 2M steps).", "## Model Description\n\nThis model was captured during a reproduction of\nBERT-base uncased, for English: it\nis a Transformers model pretrained on a large corpus of English data, using the\nMasked Language Modelling (MLM) and the Next Sentence Prediction (NSP)\nobjectives.\n\nThe intended uses, limitations, training data and training procedure for the fully trained model are similar\nto BERT-base uncased. Two major\ndifferences with the original model:\n\n* We pre-trained the MultiBERTs models for 2 million steps using sequence\n length 512 (instead of 1 million steps using sequence length 128 then 512).\n* We used an alternative version of Wikipedia and Books Corpus, initially\n collected for Turc et al., 2019.\n\nThis is a best-effort reproduction, and so it is probable that differences with\nthe original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original\nBERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).\nSee our technical report for more details.", "### How to use\n\nUsing code from\nBERT-base uncased, here is an example based on\nTensorflow:\n\n\n\nPyTorch version:\n\n\n\ninfo" ]