pipeline_tag
stringclasses
48 values
library_name
stringclasses
198 values
text
stringlengths
1
900k
metadata
stringlengths
2
438k
id
stringlengths
5
122
last_modified
null
tags
listlengths
1
1.84k
sha
null
created_at
stringlengths
25
25
arxiv
listlengths
0
201
languages
listlengths
0
1.83k
tags_str
stringlengths
17
9.34k
text_str
stringlengths
0
389k
text_lists
listlengths
0
722
processed_texts
listlengths
1
723
fill-mask
transformers
# This model is trained on 180G of data; we recommend using it instead of the original version. ## Chinese ELECTRA Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 the parameters of BERT and its variants. This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra) You may also be interested in: - Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm - Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA - Chinese XLNet: https://github.com/ymcui/Chinese-XLNet - Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer More resources by HFL: https://github.com/ymcui/HFL-Anthology ## Citation If you find our resources or paper useful, please consider including the following citation in your paper. - https://arxiv.org/abs/2004.13922 ``` @inproceedings{cui-etal-2020-revisiting, title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing", author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58", pages = "657--668", } ```
{"language": ["zh"], "license": "apache-2.0", "pipeline_tag": "fill-mask"}
hfl/chinese-electra-180g-base-generator
null
[ "transformers", "pytorch", "tf", "electra", "fill-mask", "zh", "arxiv:2004.13922", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2004.13922" ]
[ "zh" ]
TAGS #transformers #pytorch #tf #electra #fill-mask #zh #arxiv-2004.13922 #license-apache-2.0 #endpoints_compatible #region-us
# This model is trained on 180G of data; we recommend using it instead of the original version. ## Chinese ELECTRA Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 the parameters of BERT and its variants. This project is based on the official code of ELECTRA: URL You may also be interested in: - Chinese BERT series: URL - Chinese ELECTRA: URL - Chinese XLNet: URL - Knowledge Distillation Toolkit - TextBrewer: URL More resources by HFL: URL If you find our resources or paper useful, please consider including the following citation in your paper. - URL
[ "# This model is trained on 180G of data; we recommend using it instead of the original version.", "## Chinese ELECTRA\nGoogle and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants.\nTo further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.\nELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 the parameters of BERT and its variants.\n\nThis project is based on the official code of ELECTRA: URL\n\nYou may also be interested in:\n- Chinese BERT series: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\n\nIf you find our resources or paper useful, please consider including the following citation in your paper.\n- URL" ]
[ "TAGS\n#transformers #pytorch #tf #electra #fill-mask #zh #arxiv-2004.13922 #license-apache-2.0 #endpoints_compatible #region-us \n", "# This model is trained on 180G of data; we recommend using it instead of the original version.", "## Chinese ELECTRA\nGoogle and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants.\nTo further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.\nELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 the parameters of BERT and its variants.\n\nThis project is based on the official code of ELECTRA: URL\n\nYou may also be interested in:\n- Chinese BERT series: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\n\nIf you find our resources or paper useful, please consider including the following citation in your paper.\n- URL" ]
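The row above is a fill-mask generator model. A minimal usage sketch with the Hugging Face transformers fill-mask pipeline follows; the model id is taken from this row's card, while the example sentence is illustrative. The import is kept inside the function because constructing the pipeline downloads the model weights:

```python
MODEL = "hfl/chinese-electra-180g-base-generator"  # repo id from this row

def fill_mask(text):
    """Return the generator's top predictions for a [MASK] slot."""
    # Lazy import: building the pipeline fetches the model weights.
    from transformers import pipeline
    fm = pipeline("fill-mask", model=MODEL)
    return fm(text)

if __name__ == "__main__":
    # Illustrative sentence; each prediction carries a token and a score.
    for pred in fill_mask("哈尔滨是[MASK]龙江的省会。"):
        print(pred["token_str"], round(pred["score"], 3))
```

Each prediction dict from the pipeline includes the filled token (`token_str`) and its probability (`score`), so the loop prints the top candidates for the masked position.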
null
transformers
# This model is trained on 180G of data; we recommend using it instead of the original version. ## Chinese ELECTRA Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 the parameters of BERT and its variants. This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra) You may also be interested in: - Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm - Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA - Chinese XLNet: https://github.com/ymcui/Chinese-XLNet - Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer More resources by HFL: https://github.com/ymcui/HFL-Anthology ## Citation If you find our resources or paper useful, please consider including the following citation in your paper. - https://arxiv.org/abs/2004.13922 ``` @inproceedings{cui-etal-2020-revisiting, title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing", author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58", pages = "657--668", } ```
{"language": ["zh"], "license": "apache-2.0"}
hfl/chinese-electra-180g-large-discriminator
null
[ "transformers", "pytorch", "tf", "electra", "zh", "arxiv:2004.13922", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2004.13922" ]
[ "zh" ]
TAGS #transformers #pytorch #tf #electra #zh #arxiv-2004.13922 #license-apache-2.0 #endpoints_compatible #has_space #region-us
# This model is trained on 180G of data; we recommend using it instead of the original version. ## Chinese ELECTRA Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 the parameters of BERT and its variants. This project is based on the official code of ELECTRA: URL You may also be interested in: - Chinese BERT series: URL - Chinese ELECTRA: URL - Chinese XLNet: URL - Knowledge Distillation Toolkit - TextBrewer: URL More resources by HFL: URL If you find our resources or paper useful, please consider including the following citation in your paper. - URL
[ "# This model is trained on 180G of data; we recommend using it instead of the original version.", "## Chinese ELECTRA\nGoogle and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants.\nTo further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.\nELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 the parameters of BERT and its variants.\n\nThis project is based on the official code of ELECTRA: URL\n\nYou may also be interested in:\n- Chinese BERT series: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\n\nIf you find our resources or paper useful, please consider including the following citation in your paper.\n- URL" ]
[ "TAGS\n#transformers #pytorch #tf #electra #zh #arxiv-2004.13922 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n", "# This model is trained on 180G of data; we recommend using it instead of the original version.", "## Chinese ELECTRA\nGoogle and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants.\nTo further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.\nELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 the parameters of BERT and its variants.\n\nThis project is based on the official code of ELECTRA: URL\n\nYou may also be interested in:\n- Chinese BERT series: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\n\nIf you find our resources or paper useful, please consider including the following citation in your paper.\n- URL" ]
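Unlike the generator rows, the discriminator model above does not fill masks; it scores each token as original vs. replaced. A hedged sketch, assuming transformers and torch are installed; the input sentence is illustrative, and the imports are kept inside the function because loading the model downloads its weights:

```python
MODEL = "hfl/chinese-electra-180g-large-discriminator"  # repo id from this row

def detect_replaced_tokens(text):
    """Label each token True if the discriminator judges it replaced."""
    # Lazy imports: loading the model fetches its weights.
    import torch
    from transformers import ElectraForPreTraining, ElectraTokenizerFast

    tokenizer = ElectraTokenizerFast.from_pretrained(MODEL)
    model = ElectraForPreTraining.from_pretrained(MODEL)
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0]
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    # A positive logit means the discriminator flags the token as replaced.
    return list(zip(tokens, (logits > 0).tolist()))

if __name__ == "__main__":
    print(detect_replaced_tokens("今天天气很好。"))
```

This is the ELECTRA pre-training objective exposed directly: for an unmodified natural sentence, most tokens should come back `False` (judged original).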
fill-mask
transformers
# This model is trained on 180G of data; we recommend using it instead of the original version. ## Chinese ELECTRA Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 the parameters of BERT and its variants. This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra) You may also be interested in: - Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm - Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA - Chinese XLNet: https://github.com/ymcui/Chinese-XLNet - Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer More resources by HFL: https://github.com/ymcui/HFL-Anthology ## Citation If you find our resources or paper useful, please consider including the following citation in your paper. - https://arxiv.org/abs/2004.13922 ``` @inproceedings{cui-etal-2020-revisiting, title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing", author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58", pages = "657--668", } ```
{"language": ["zh"], "license": "apache-2.0", "pipeline_tag": "fill-mask"}
hfl/chinese-electra-180g-large-generator
null
[ "transformers", "pytorch", "tf", "electra", "fill-mask", "zh", "arxiv:2004.13922", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2004.13922" ]
[ "zh" ]
TAGS #transformers #pytorch #tf #electra #fill-mask #zh #arxiv-2004.13922 #license-apache-2.0 #endpoints_compatible #region-us
# This model is trained on 180G of data; we recommend using it instead of the original version. ## Chinese ELECTRA Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 the parameters of BERT and its variants. This project is based on the official code of ELECTRA: URL You may also be interested in: - Chinese BERT series: URL - Chinese ELECTRA: URL - Chinese XLNet: URL - Knowledge Distillation Toolkit - TextBrewer: URL More resources by HFL: URL If you find our resources or paper useful, please consider including the following citation in your paper. - URL
[ "# This model is trained on 180G of data; we recommend using it instead of the original version.", "## Chinese ELECTRA\nGoogle and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants.\nTo further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.\nELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 the parameters of BERT and its variants.\n\nThis project is based on the official code of ELECTRA: URL\n\nYou may also be interested in:\n- Chinese BERT series: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\n\nIf you find our resources or paper useful, please consider including the following citation in your paper.\n- URL" ]
[ "TAGS\n#transformers #pytorch #tf #electra #fill-mask #zh #arxiv-2004.13922 #license-apache-2.0 #endpoints_compatible #region-us \n", "# This model is trained on 180G of data; we recommend using it instead of the original version.", "## Chinese ELECTRA\nGoogle and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants.\nTo further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.\nELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 the parameters of BERT and its variants.\n\nThis project is based on the official code of ELECTRA: URL\n\nYou may also be interested in:\n- Chinese BERT series: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\n\nIf you find our resources or paper useful, please consider including the following citation in your paper.\n- URL" ]
null
transformers
# This model is trained on 180G of data; we recommend using it instead of the original version. ## Chinese ELECTRA Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 the parameters of BERT and its variants. This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra) You may also be interested in: - Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm - Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA - Chinese XLNet: https://github.com/ymcui/Chinese-XLNet - Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer More resources by HFL: https://github.com/ymcui/HFL-Anthology ## Citation If you find our resources or paper useful, please consider including the following citation in your paper. - https://arxiv.org/abs/2004.13922 ``` @inproceedings{cui-etal-2020-revisiting, title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing", author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58", pages = "657--668", } ```
{"language": ["zh"], "license": "apache-2.0"}
hfl/chinese-electra-180g-small-discriminator
null
[ "transformers", "pytorch", "tf", "electra", "pretraining", "zh", "arxiv:2004.13922", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2004.13922" ]
[ "zh" ]
TAGS #transformers #pytorch #tf #electra #pretraining #zh #arxiv-2004.13922 #license-apache-2.0 #endpoints_compatible #region-us
# This model is trained on 180G of data; we recommend using it instead of the original version. ## Chinese ELECTRA Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 the parameters of BERT and its variants. This project is based on the official code of ELECTRA: URL You may also be interested in: - Chinese BERT series: URL - Chinese ELECTRA: URL - Chinese XLNet: URL - Knowledge Distillation Toolkit - TextBrewer: URL More resources by HFL: URL If you find our resources or paper useful, please consider including the following citation in your paper. - URL
[ "# This model is trained on 180G of data; we recommend using it instead of the original version.", "## Chinese ELECTRA\nGoogle and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants.\nTo further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.\nELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 the parameters of BERT and its variants.\n\nThis project is based on the official code of ELECTRA: URL\n\nYou may also be interested in:\n- Chinese BERT series: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\n\nIf you find our resources or paper useful, please consider including the following citation in your paper.\n- URL" ]
[ "TAGS\n#transformers #pytorch #tf #electra #pretraining #zh #arxiv-2004.13922 #license-apache-2.0 #endpoints_compatible #region-us \n", "# This model is trained on 180G of data; we recommend using it instead of the original version.", "## Chinese ELECTRA\nGoogle and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants.\nTo further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.\nELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 the parameters of BERT and its variants.\n\nThis project is based on the official code of ELECTRA: URL\n\nYou may also be interested in:\n- Chinese BERT series: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\n\nIf you find our resources or paper useful, please consider including the following citation in your paper.\n- URL" ]
null
transformers
# This model is trained on 180G of data; we recommend using it instead of the original version. ## Chinese ELECTRA Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 the parameters of BERT and its variants. This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra) You may also be interested in: - Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm - Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA - Chinese XLNet: https://github.com/ymcui/Chinese-XLNet - Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer More resources by HFL: https://github.com/ymcui/HFL-Anthology ## Citation If you find our resources or paper useful, please consider including the following citation in your paper. - https://arxiv.org/abs/2004.13922 ``` @inproceedings{cui-etal-2020-revisiting, title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing", author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58", pages = "657--668", } ```
{"language": ["zh"], "license": "apache-2.0"}
hfl/chinese-electra-180g-small-ex-discriminator
null
[ "transformers", "pytorch", "tf", "electra", "zh", "arxiv:2004.13922", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2004.13922" ]
[ "zh" ]
TAGS #transformers #pytorch #tf #electra #zh #arxiv-2004.13922 #license-apache-2.0 #endpoints_compatible #region-us
# This model is trained on 180G of data; we recommend using it instead of the original version. ## Chinese ELECTRA Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 the parameters of BERT and its variants. This project is based on the official code of ELECTRA: URL You may also be interested in: - Chinese BERT series: URL - Chinese ELECTRA: URL - Chinese XLNet: URL - Knowledge Distillation Toolkit - TextBrewer: URL More resources by HFL: URL If you find our resources or paper useful, please consider including the following citation in your paper. - URL
[ "# This model is trained on 180G of data; we recommend using it instead of the original version.", "## Chinese ELECTRA\nGoogle and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants.\nTo further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.\nELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 the parameters of BERT and its variants.\n\nThis project is based on the official code of ELECTRA: URL\n\nYou may also be interested in:\n- Chinese BERT series: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\n\nIf you find our resources or paper useful, please consider including the following citation in your paper.\n- URL" ]
[ "TAGS\n#transformers #pytorch #tf #electra #zh #arxiv-2004.13922 #license-apache-2.0 #endpoints_compatible #region-us \n", "# This model is trained on 180G of data; we recommend using it instead of the original version.", "## Chinese ELECTRA\nGoogle and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants.\nTo further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.\nELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 the parameters of BERT and its variants.\n\nThis project is based on the official code of ELECTRA: URL\n\nYou may also be interested in:\n- Chinese BERT series: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\n\nIf you find our resources or paper useful, please consider including the following citation in your paper.\n- URL" ]
fill-mask
transformers
# This model is trained on 180G of data; we recommend using it instead of the original version. ## Chinese ELECTRA Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 the parameters of BERT and its variants. This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra) You may also be interested in: - Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm - Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA - Chinese XLNet: https://github.com/ymcui/Chinese-XLNet - Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer More resources by HFL: https://github.com/ymcui/HFL-Anthology ## Citation If you find our resources or paper useful, please consider including the following citation in your paper. - https://arxiv.org/abs/2004.13922 ``` @inproceedings{cui-etal-2020-revisiting, title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing", author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58", pages = "657--668", } ```
{"language": ["zh"], "license": "apache-2.0", "pipeline_tag": "fill-mask"}
hfl/chinese-electra-180g-small-ex-generator
null
[ "transformers", "pytorch", "tf", "electra", "fill-mask", "zh", "arxiv:2004.13922", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2004.13922" ]
[ "zh" ]
TAGS #transformers #pytorch #tf #electra #fill-mask #zh #arxiv-2004.13922 #license-apache-2.0 #endpoints_compatible #region-us
# This model is trained on 180G of data; we recommend using it instead of the original version. ## Chinese ELECTRA Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 the parameters of BERT and its variants. This project is based on the official code of ELECTRA: URL You may also be interested in: - Chinese BERT series: URL - Chinese ELECTRA: URL - Chinese XLNet: URL - Knowledge Distillation Toolkit - TextBrewer: URL More resources by HFL: URL If you find our resources or paper useful, please consider including the following citation in your paper. - URL
[ "# This model is trained on 180G of data; we recommend using it instead of the original version.", "## Chinese ELECTRA\nGoogle and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants.\nTo further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.\nELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 the parameters of BERT and its variants.\n\nThis project is based on the official code of ELECTRA: URL\n\nYou may also be interested in:\n- Chinese BERT series: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\n\nIf you find our resources or paper useful, please consider including the following citation in your paper.\n- URL" ]
[ "TAGS\n#transformers #pytorch #tf #electra #fill-mask #zh #arxiv-2004.13922 #license-apache-2.0 #endpoints_compatible #region-us \n", "# This model is trained on 180G of data; we recommend using it instead of the original version.", "## Chinese ELECTRA\nGoogle and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants.\nTo further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.\nELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 the parameters of BERT and its variants.\n\nThis project is based on the official code of ELECTRA: URL\n\nYou may also be interested in:\n- Chinese BERT series: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\n\nIf you find our resources or paper useful, please consider including the following citation in your paper.\n- URL" ]
fill-mask
transformers
# This model is trained on 180G of data; we recommend using this one rather than the original version. ## Chinese ELECTRA Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters of BERT and its variants. This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra) You may also be interested in: - Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm - Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA - Chinese XLNet: https://github.com/ymcui/Chinese-XLNet - Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer More resources by HFL: https://github.com/ymcui/HFL-Anthology ## Citation If you find our resource or paper useful, please consider including the following citation in your paper. - https://arxiv.org/abs/2004.13922 ``` @inproceedings{cui-etal-2020-revisiting, title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing", author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58", pages = "657--668", } ```
{"language": ["zh"], "license": "apache-2.0", "pipeline_tag": "fill-mask"}
hfl/chinese-electra-180g-small-generator
null
[ "transformers", "pytorch", "tf", "electra", "fill-mask", "zh", "arxiv:2004.13922", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2004.13922" ]
[ "zh" ]
TAGS #transformers #pytorch #tf #electra #fill-mask #zh #arxiv-2004.13922 #license-apache-2.0 #endpoints_compatible #region-us
# This model is trained on 180G of data; we recommend using this one rather than the original version. ## Chinese ELECTRA Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters of BERT and its variants. This project is based on the official code of ELECTRA: URL You may also be interested in: - Chinese BERT series: URL - Chinese ELECTRA: URL - Chinese XLNet: URL - Knowledge Distillation Toolkit - TextBrewer: URL More resources by HFL: URL If you find our resource or paper useful, please consider including the following citation in your paper. - URL
[ "# This model is trained on 180G data, we recommend using this one than the original version.", "## Chinese ELECTRA\nGoogle and Stanford University released a new pre-trained model called ELECTRA, which has a much compact model size and relatively competitive performance compared to BERT and its variants.\nFor further accelerating the research of the Chinese pre-trained model, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.\nELECTRA-small could reach similar or even higher scores on several NLP tasks with only 1/10 parameters compared to BERT and its variants.\n\nThis project is based on the official code of ELECTRA: URL\n\nYou may also interested in,\n- Chinese BERT series: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\n\nIf you find our resource or paper is useful, please consider including the following citation in your paper.\n- URL" ]
[ "TAGS\n#transformers #pytorch #tf #electra #fill-mask #zh #arxiv-2004.13922 #license-apache-2.0 #endpoints_compatible #region-us \n", "# This model is trained on 180G data, we recommend using this one than the original version.", "## Chinese ELECTRA\nGoogle and Stanford University released a new pre-trained model called ELECTRA, which has a much compact model size and relatively competitive performance compared to BERT and its variants.\nFor further accelerating the research of the Chinese pre-trained model, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.\nELECTRA-small could reach similar or even higher scores on several NLP tasks with only 1/10 parameters compared to BERT and its variants.\n\nThis project is based on the official code of ELECTRA: URL\n\nYou may also interested in,\n- Chinese BERT series: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\n\nIf you find our resource or paper is useful, please consider including the following citation in your paper.\n- URL" ]
null
transformers
**Please use `ElectraForPreTraining` for `discriminator` and `ElectraForMaskedLM` for `generator` if you are re-training these models.** ## Chinese ELECTRA Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters of BERT and its variants. This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra) You may also be interested in: - Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm - Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA - Chinese XLNet: https://github.com/ymcui/Chinese-XLNet - Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer More resources by HFL: https://github.com/ymcui/HFL-Anthology ## Citation If you find our resource or paper useful, please consider including the following citation in your paper. - https://arxiv.org/abs/2004.13922 ``` @inproceedings{cui-etal-2020-revisiting, title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing", author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58", pages = "657--668", } ```
{"language": ["zh"], "license": "apache-2.0"}
hfl/chinese-electra-base-discriminator
null
[ "transformers", "pytorch", "tf", "electra", "zh", "arxiv:2004.13922", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2004.13922" ]
[ "zh" ]
TAGS #transformers #pytorch #tf #electra #zh #arxiv-2004.13922 #license-apache-2.0 #endpoints_compatible #region-us
Please use 'ElectraForPreTraining' for 'discriminator' and 'ElectraForMaskedLM' for 'generator' if you are re-training these models. ## Chinese ELECTRA Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters of BERT and its variants. This project is based on the official code of ELECTRA: URL You may also be interested in: - Chinese BERT series: URL - Chinese ELECTRA: URL - Chinese XLNet: URL - Knowledge Distillation Toolkit - TextBrewer: URL More resources by HFL: URL If you find our resource or paper useful, please consider including the following citation in your paper. - URL
[ "## Chinese ELECTRA\nGoogle and Stanford University released a new pre-trained model called ELECTRA, which has a much compact model size and relatively competitive performance compared to BERT and its variants.\nFor further accelerating the research of the Chinese pre-trained model, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.\nELECTRA-small could reach similar or even higher scores on several NLP tasks with only 1/10 parameters compared to BERT and its variants.\n\nThis project is based on the official code of ELECTRA: URL\n\nYou may also interested in,\n- Chinese BERT series: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\n\nIf you find our resource or paper is useful, please consider including the following citation in your paper.\n- URL" ]
[ "TAGS\n#transformers #pytorch #tf #electra #zh #arxiv-2004.13922 #license-apache-2.0 #endpoints_compatible #region-us \n", "## Chinese ELECTRA\nGoogle and Stanford University released a new pre-trained model called ELECTRA, which has a much compact model size and relatively competitive performance compared to BERT and its variants.\nFor further accelerating the research of the Chinese pre-trained model, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.\nELECTRA-small could reach similar or even higher scores on several NLP tasks with only 1/10 parameters compared to BERT and its variants.\n\nThis project is based on the official code of ELECTRA: URL\n\nYou may also interested in,\n- Chinese BERT series: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\n\nIf you find our resource or paper is useful, please consider including the following citation in your paper.\n- URL" ]
fill-mask
transformers
**Please use `ElectraForPreTraining` for `discriminator` and `ElectraForMaskedLM` for `generator` if you are re-training these models.** ## Chinese ELECTRA Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters of BERT and its variants. This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra) You may also be interested in: - Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm - Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA - Chinese XLNet: https://github.com/ymcui/Chinese-XLNet - Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer More resources by HFL: https://github.com/ymcui/HFL-Anthology ## Citation If you find our resource or paper useful, please consider including the following citation in your paper. - https://arxiv.org/abs/2004.13922 ``` @inproceedings{cui-etal-2020-revisiting, title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing", author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58", pages = "657--668", } ```
{"language": ["zh"], "license": "apache-2.0", "pipeline_tag": "fill-mask"}
hfl/chinese-electra-base-generator
null
[ "transformers", "pytorch", "tf", "electra", "fill-mask", "zh", "arxiv:2004.13922", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2004.13922" ]
[ "zh" ]
TAGS #transformers #pytorch #tf #electra #fill-mask #zh #arxiv-2004.13922 #license-apache-2.0 #endpoints_compatible #region-us
Please use 'ElectraForPreTraining' for 'discriminator' and 'ElectraForMaskedLM' for 'generator' if you are re-training these models. ## Chinese ELECTRA Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters of BERT and its variants. This project is based on the official code of ELECTRA: URL You may also be interested in: - Chinese BERT series: URL - Chinese ELECTRA: URL - Chinese XLNet: URL - Knowledge Distillation Toolkit - TextBrewer: URL More resources by HFL: URL If you find our resource or paper useful, please consider including the following citation in your paper. - URL
[ "## Chinese ELECTRA\nGoogle and Stanford University released a new pre-trained model called ELECTRA, which has a much compact model size and relatively competitive performance compared to BERT and its variants.\nFor further accelerating the research of the Chinese pre-trained model, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.\nELECTRA-small could reach similar or even higher scores on several NLP tasks with only 1/10 parameters compared to BERT and its variants.\n\nThis project is based on the official code of ELECTRA: URL\n\nYou may also interested in,\n- Chinese BERT series: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\n\nIf you find our resource or paper is useful, please consider including the following citation in your paper.\n- URL" ]
[ "TAGS\n#transformers #pytorch #tf #electra #fill-mask #zh #arxiv-2004.13922 #license-apache-2.0 #endpoints_compatible #region-us \n", "## Chinese ELECTRA\nGoogle and Stanford University released a new pre-trained model called ELECTRA, which has a much compact model size and relatively competitive performance compared to BERT and its variants.\nFor further accelerating the research of the Chinese pre-trained model, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.\nELECTRA-small could reach similar or even higher scores on several NLP tasks with only 1/10 parameters compared to BERT and its variants.\n\nThis project is based on the official code of ELECTRA: URL\n\nYou may also interested in,\n- Chinese BERT series: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\n\nIf you find our resource or paper is useful, please consider including the following citation in your paper.\n- URL" ]
null
transformers
**Please use `ElectraForPreTraining` for `discriminator` and `ElectraForMaskedLM` for `generator` if you are re-training these models.** ## Chinese ELECTRA Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters of BERT and its variants. This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra) You may also be interested in: - Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm - Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA - Chinese XLNet: https://github.com/ymcui/Chinese-XLNet - Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer More resources by HFL: https://github.com/ymcui/HFL-Anthology ## Citation If you find our resource or paper useful, please consider including the following citation in your paper. - https://arxiv.org/abs/2004.13922 ``` @inproceedings{cui-etal-2020-revisiting, title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing", author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58", pages = "657--668", } ```
{"language": ["zh"], "license": "apache-2.0"}
hfl/chinese-electra-large-discriminator
null
[ "transformers", "pytorch", "tf", "electra", "zh", "arxiv:2004.13922", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2004.13922" ]
[ "zh" ]
TAGS #transformers #pytorch #tf #electra #zh #arxiv-2004.13922 #license-apache-2.0 #endpoints_compatible #region-us
Please use 'ElectraForPreTraining' for 'discriminator' and 'ElectraForMaskedLM' for 'generator' if you are re-training these models. ## Chinese ELECTRA Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters of BERT and its variants. This project is based on the official code of ELECTRA: URL You may also be interested in: - Chinese BERT series: URL - Chinese ELECTRA: URL - Chinese XLNet: URL - Knowledge Distillation Toolkit - TextBrewer: URL More resources by HFL: URL If you find our resource or paper useful, please consider including the following citation in your paper. - URL
[ "## Chinese ELECTRA\nGoogle and Stanford University released a new pre-trained model called ELECTRA, which has a much compact model size and relatively competitive performance compared to BERT and its variants.\nFor further accelerating the research of the Chinese pre-trained model, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.\nELECTRA-small could reach similar or even higher scores on several NLP tasks with only 1/10 parameters compared to BERT and its variants.\n\nThis project is based on the official code of ELECTRA: URL\n\nYou may also interested in,\n- Chinese BERT series: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\n\nIf you find our resource or paper is useful, please consider including the following citation in your paper.\n- URL" ]
[ "TAGS\n#transformers #pytorch #tf #electra #zh #arxiv-2004.13922 #license-apache-2.0 #endpoints_compatible #region-us \n", "## Chinese ELECTRA\nGoogle and Stanford University released a new pre-trained model called ELECTRA, which has a much compact model size and relatively competitive performance compared to BERT and its variants.\nFor further accelerating the research of the Chinese pre-trained model, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.\nELECTRA-small could reach similar or even higher scores on several NLP tasks with only 1/10 parameters compared to BERT and its variants.\n\nThis project is based on the official code of ELECTRA: URL\n\nYou may also interested in,\n- Chinese BERT series: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\n\nIf you find our resource or paper is useful, please consider including the following citation in your paper.\n- URL" ]
fill-mask
transformers
**Please use `ElectraForPreTraining` for `discriminator` and `ElectraForMaskedLM` for `generator` if you are re-training these models.** ## Chinese ELECTRA Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters of BERT and its variants. This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra) You may also be interested in: - Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm - Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA - Chinese XLNet: https://github.com/ymcui/Chinese-XLNet - Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer More resources by HFL: https://github.com/ymcui/HFL-Anthology ## Citation If you find our resource or paper useful, please consider including the following citation in your paper. - https://arxiv.org/abs/2004.13922 ``` @inproceedings{cui-etal-2020-revisiting, title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing", author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58", pages = "657--668", } ```
{"language": ["zh"], "license": "apache-2.0", "pipeline_tag": "fill-mask"}
hfl/chinese-electra-large-generator
null
[ "transformers", "pytorch", "tf", "electra", "fill-mask", "zh", "arxiv:2004.13922", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2004.13922" ]
[ "zh" ]
TAGS #transformers #pytorch #tf #electra #fill-mask #zh #arxiv-2004.13922 #license-apache-2.0 #endpoints_compatible #region-us
Please use 'ElectraForPreTraining' for 'discriminator' and 'ElectraForMaskedLM' for 'generator' if you are re-training these models. ## Chinese ELECTRA Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters of BERT and its variants. This project is based on the official code of ELECTRA: URL You may also be interested in: - Chinese BERT series: URL - Chinese ELECTRA: URL - Chinese XLNet: URL - Knowledge Distillation Toolkit - TextBrewer: URL More resources by HFL: URL If you find our resource or paper useful, please consider including the following citation in your paper. - URL
[ "## Chinese ELECTRA\nGoogle and Stanford University released a new pre-trained model called ELECTRA, which has a much compact model size and relatively competitive performance compared to BERT and its variants.\nFor further accelerating the research of the Chinese pre-trained model, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.\nELECTRA-small could reach similar or even higher scores on several NLP tasks with only 1/10 parameters compared to BERT and its variants.\n\nThis project is based on the official code of ELECTRA: URL\n\nYou may also interested in,\n- Chinese BERT series: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\n\nIf you find our resource or paper is useful, please consider including the following citation in your paper.\n- URL" ]
[ "TAGS\n#transformers #pytorch #tf #electra #fill-mask #zh #arxiv-2004.13922 #license-apache-2.0 #endpoints_compatible #region-us \n", "## Chinese ELECTRA\nGoogle and Stanford University released a new pre-trained model called ELECTRA, which has a much compact model size and relatively competitive performance compared to BERT and its variants.\nFor further accelerating the research of the Chinese pre-trained model, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.\nELECTRA-small could reach similar or even higher scores on several NLP tasks with only 1/10 parameters compared to BERT and its variants.\n\nThis project is based on the official code of ELECTRA: URL\n\nYou may also interested in,\n- Chinese BERT series: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\n\nIf you find our resource or paper is useful, please consider including the following citation in your paper.\n- URL" ]
null
transformers
**Please use `ElectraForPreTraining` for `discriminator` and `ElectraForMaskedLM` for `generator` if you are re-training these models.** ## Chinese ELECTRA Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters of BERT and its variants. This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra) You may also be interested in: - Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm - Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA - Chinese XLNet: https://github.com/ymcui/Chinese-XLNet - Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer More resources by HFL: https://github.com/ymcui/HFL-Anthology ## Citation If you find our resource or paper useful, please consider including the following citation in your paper. - https://arxiv.org/abs/2004.13922 ``` @inproceedings{cui-etal-2020-revisiting, title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing", author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58", pages = "657--668", } ```
{"language": ["zh"], "license": "apache-2.0"}
hfl/chinese-electra-small-discriminator
null
[ "transformers", "pytorch", "tf", "electra", "zh", "arxiv:2004.13922", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2004.13922" ]
[ "zh" ]
TAGS #transformers #pytorch #tf #electra #zh #arxiv-2004.13922 #license-apache-2.0 #endpoints_compatible #region-us
Please use 'ElectraForPreTraining' for 'discriminator' and 'ElectraForMaskedLM' for 'generator' if you are re-training these models. ## Chinese ELECTRA Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters of BERT and its variants. This project is based on the official code of ELECTRA: URL You may also be interested in: - Chinese BERT series: URL - Chinese ELECTRA: URL - Chinese XLNet: URL - Knowledge Distillation Toolkit - TextBrewer: URL More resources by HFL: URL If you find our resource or paper useful, please consider including the following citation in your paper. - URL
[ "## Chinese ELECTRA\nGoogle and Stanford University released a new pre-trained model called ELECTRA, which has a much compact model size and relatively competitive performance compared to BERT and its variants.\nFor further accelerating the research of the Chinese pre-trained model, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.\nELECTRA-small could reach similar or even higher scores on several NLP tasks with only 1/10 parameters compared to BERT and its variants.\n\nThis project is based on the official code of ELECTRA: URL\n\nYou may also interested in,\n- Chinese BERT series: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\n\nIf you find our resource or paper is useful, please consider including the following citation in your paper.\n- URL" ]
[ "TAGS\n#transformers #pytorch #tf #electra #zh #arxiv-2004.13922 #license-apache-2.0 #endpoints_compatible #region-us \n", "## Chinese ELECTRA\nGoogle and Stanford University released a new pre-trained model called ELECTRA, which has a much compact model size and relatively competitive performance compared to BERT and its variants.\nFor further accelerating the research of the Chinese pre-trained model, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.\nELECTRA-small could reach similar or even higher scores on several NLP tasks with only 1/10 parameters compared to BERT and its variants.\n\nThis project is based on the official code of ELECTRA: URL\n\nYou may also interested in,\n- Chinese BERT series: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\n\nIf you find our resource or paper is useful, please consider including the following citation in your paper.\n- URL" ]
null
transformers
**Please use `ElectraForPreTraining` for `discriminator` and `ElectraForMaskedLM` for `generator` if you are re-training these models.** ## Chinese ELECTRA Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters compared to BERT and its variants. This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra) You may also be interested in: - Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm - Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA - Chinese XLNet: https://github.com/ymcui/Chinese-XLNet - Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer More resources by HFL: https://github.com/ymcui/HFL-Anthology ## Citation If you find our resource or paper useful, please consider including the following citation in your paper. - https://arxiv.org/abs/2004.13922 ``` @inproceedings{cui-etal-2020-revisiting, title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing", author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58", pages = "657--668", } ```
{"language": ["zh"], "license": "apache-2.0"}
hfl/chinese-electra-small-ex-discriminator
null
[ "transformers", "pytorch", "tf", "zh", "arxiv:2004.13922", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2004.13922" ]
[ "zh" ]
TAGS #transformers #pytorch #tf #zh #arxiv-2004.13922 #license-apache-2.0 #endpoints_compatible #region-us
Please use 'ElectraForPreTraining' for 'discriminator' and 'ElectraForMaskedLM' for 'generator' if you are re-training these models. ## Chinese ELECTRA Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters compared to BERT and its variants. This project is based on the official code of ELECTRA: URL You may also be interested in: - Chinese BERT series: URL - Chinese ELECTRA: URL - Chinese XLNet: URL - Knowledge Distillation Toolkit - TextBrewer: URL More resources by HFL: URL If you find our resource or paper useful, please consider including the following citation in your paper. - URL
[ "## Chinese ELECTRA\nGoogle and Stanford University released a new pre-trained model called ELECTRA, which has a much compact model size and relatively competitive performance compared to BERT and its variants.\nFor further accelerating the research of the Chinese pre-trained model, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.\nELECTRA-small could reach similar or even higher scores on several NLP tasks with only 1/10 parameters compared to BERT and its variants.\n\nThis project is based on the official code of ELECTRA: URL\n\nYou may also interested in,\n- Chinese BERT series: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\n\nIf you find our resource or paper is useful, please consider including the following citation in your paper.\n- URL" ]
[ "TAGS\n#transformers #pytorch #tf #zh #arxiv-2004.13922 #license-apache-2.0 #endpoints_compatible #region-us \n", "## Chinese ELECTRA\nGoogle and Stanford University released a new pre-trained model called ELECTRA, which has a much compact model size and relatively competitive performance compared to BERT and its variants.\nFor further accelerating the research of the Chinese pre-trained model, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.\nELECTRA-small could reach similar or even higher scores on several NLP tasks with only 1/10 parameters compared to BERT and its variants.\n\nThis project is based on the official code of ELECTRA: URL\n\nYou may also interested in,\n- Chinese BERT series: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\n\nIf you find our resource or paper is useful, please consider including the following citation in your paper.\n- URL" ]
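The record above (and its generator counterpart below) notes which `transformers` class pairs with each checkpoint when re-training. A minimal sketch of that mapping, assuming the checkpoint ids from these records; the actual weight download (commented out) needs network access and is only illustrative:

```python
def recommended_class(model_id: str) -> str:
    """Return the transformers class the model cards suggest for re-training."""
    if model_id.endswith("discriminator"):
        return "ElectraForPreTraining"
    if model_id.endswith("generator"):
        return "ElectraForMaskedLM"
    raise ValueError(f"unknown checkpoint kind: {model_id}")

print(recommended_class("hfl/chinese-electra-small-ex-discriminator"))
# Loading the weights themselves would look like (requires network):
# from transformers import ElectraForPreTraining
# model = ElectraForPreTraining.from_pretrained(
#     "hfl/chinese-electra-small-ex-discriminator")
```

The helper only encodes the card's stated convention; it does not inspect the repository itself.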
fill-mask
transformers
**Please use `ElectraForPreTraining` for `discriminator` and `ElectraForMaskedLM` for `generator` if you are re-training these models.** ## Chinese ELECTRA Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters compared to BERT and its variants. This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra) You may also be interested in: - Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm - Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA - Chinese XLNet: https://github.com/ymcui/Chinese-XLNet - Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer More resources by HFL: https://github.com/ymcui/HFL-Anthology ## Citation If you find our resource or paper useful, please consider including the following citation in your paper. - https://arxiv.org/abs/2004.13922 ``` @inproceedings{cui-etal-2020-revisiting, title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing", author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58", pages = "657--668", } ```
{"language": ["zh"], "license": "apache-2.0", "pipeline_tag": "fill-mask"}
hfl/chinese-electra-small-ex-generator
null
[ "transformers", "pytorch", "tf", "fill-mask", "zh", "arxiv:2004.13922", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2004.13922" ]
[ "zh" ]
TAGS #transformers #pytorch #tf #fill-mask #zh #arxiv-2004.13922 #license-apache-2.0 #endpoints_compatible #region-us
Please use 'ElectraForPreTraining' for 'discriminator' and 'ElectraForMaskedLM' for 'generator' if you are re-training these models. ## Chinese ELECTRA Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters compared to BERT and its variants. This project is based on the official code of ELECTRA: URL You may also be interested in: - Chinese BERT series: URL - Chinese ELECTRA: URL - Chinese XLNet: URL - Knowledge Distillation Toolkit - TextBrewer: URL More resources by HFL: URL If you find our resource or paper useful, please consider including the following citation in your paper. - URL
[ "## Chinese ELECTRA\nGoogle and Stanford University released a new pre-trained model called ELECTRA, which has a much compact model size and relatively competitive performance compared to BERT and its variants.\nFor further accelerating the research of the Chinese pre-trained model, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.\nELECTRA-small could reach similar or even higher scores on several NLP tasks with only 1/10 parameters compared to BERT and its variants.\n\nThis project is based on the official code of ELECTRA: URL\n\nYou may also interested in,\n- Chinese BERT series: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\n\nIf you find our resource or paper is useful, please consider including the following citation in your paper.\n- URL" ]
[ "TAGS\n#transformers #pytorch #tf #fill-mask #zh #arxiv-2004.13922 #license-apache-2.0 #endpoints_compatible #region-us \n", "## Chinese ELECTRA\nGoogle and Stanford University released a new pre-trained model called ELECTRA, which has a much compact model size and relatively competitive performance compared to BERT and its variants.\nFor further accelerating the research of the Chinese pre-trained model, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.\nELECTRA-small could reach similar or even higher scores on several NLP tasks with only 1/10 parameters compared to BERT and its variants.\n\nThis project is based on the official code of ELECTRA: URL\n\nYou may also interested in,\n- Chinese BERT series: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\n\nIf you find our resource or paper is useful, please consider including the following citation in your paper.\n- URL" ]
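The generator checkpoints in these records are tagged `fill-mask`. A hedged sketch of that use: `mask_span` is a hypothetical helper (not part of `transformers`) that prepares a masked sentence, and the actual pipeline call is commented out because it downloads the checkpoint:

```python
def mask_span(sentence: str, span: str, mask_token: str = "[MASK]") -> str:
    """Replace the first occurrence of `span` with the tokenizer's mask token."""
    return sentence.replace(span, mask_token, 1)

masked = mask_span("哈工大讯飞联合实验室发布了中文ELECTRA模型。", "模型")
print(masked)
# Running the actual fill-mask pipeline would look like (requires network):
# from transformers import pipeline
# fill = pipeline("fill-mask", model="hfl/chinese-electra-small-ex-generator")
# print(fill(masked))
```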
fill-mask
transformers
**Please use `ElectraForPreTraining` for `discriminator` and `ElectraForMaskedLM` for `generator` if you are re-training these models.** ## Chinese ELECTRA Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters compared to BERT and its variants. This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra) You may also be interested in: - Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm - Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA - Chinese XLNet: https://github.com/ymcui/Chinese-XLNet - Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer More resources by HFL: https://github.com/ymcui/HFL-Anthology ## Citation If you find our resource or paper useful, please consider including the following citation in your paper. - https://arxiv.org/abs/2004.13922 ``` @inproceedings{cui-etal-2020-revisiting, title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing", author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58", pages = "657--668", } ```
{"language": ["zh"], "license": "apache-2.0", "pipeline_tag": "fill-mask"}
hfl/chinese-electra-small-generator
null
[ "transformers", "pytorch", "tf", "electra", "fill-mask", "zh", "arxiv:2004.13922", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2004.13922" ]
[ "zh" ]
TAGS #transformers #pytorch #tf #electra #fill-mask #zh #arxiv-2004.13922 #license-apache-2.0 #endpoints_compatible #region-us
Please use 'ElectraForPreTraining' for 'discriminator' and 'ElectraForMaskedLM' for 'generator' if you are re-training these models. ## Chinese ELECTRA Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters compared to BERT and its variants. This project is based on the official code of ELECTRA: URL You may also be interested in: - Chinese BERT series: URL - Chinese ELECTRA: URL - Chinese XLNet: URL - Knowledge Distillation Toolkit - TextBrewer: URL More resources by HFL: URL If you find our resource or paper useful, please consider including the following citation in your paper. - URL
[ "## Chinese ELECTRA\nGoogle and Stanford University released a new pre-trained model called ELECTRA, which has a much compact model size and relatively competitive performance compared to BERT and its variants.\nFor further accelerating the research of the Chinese pre-trained model, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.\nELECTRA-small could reach similar or even higher scores on several NLP tasks with only 1/10 parameters compared to BERT and its variants.\n\nThis project is based on the official code of ELECTRA: URL\n\nYou may also interested in,\n- Chinese BERT series: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\n\nIf you find our resource or paper is useful, please consider including the following citation in your paper.\n- URL" ]
[ "TAGS\n#transformers #pytorch #tf #electra #fill-mask #zh #arxiv-2004.13922 #license-apache-2.0 #endpoints_compatible #region-us \n", "## Chinese ELECTRA\nGoogle and Stanford University released a new pre-trained model called ELECTRA, which has a much compact model size and relatively competitive performance compared to BERT and its variants.\nFor further accelerating the research of the Chinese pre-trained model, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.\nELECTRA-small could reach similar or even higher scores on several NLP tasks with only 1/10 parameters compared to BERT and its variants.\n\nThis project is based on the official code of ELECTRA: URL\n\nYou may also interested in,\n- Chinese BERT series: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\n\nIf you find our resource or paper is useful, please consider including the following citation in your paper.\n- URL" ]
null
transformers
# This model is specifically designed for the legal domain. ## Chinese ELECTRA Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters compared to BERT and its variants. This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra) You may also be interested in: - Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm - Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA - Chinese XLNet: https://github.com/ymcui/Chinese-XLNet - Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer More resources by HFL: https://github.com/ymcui/HFL-Anthology ## Citation If you find our resource or paper useful, please consider including the following citation in your paper. - https://arxiv.org/abs/2004.13922 ``` @inproceedings{cui-etal-2020-revisiting, title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing", author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58", pages = "657--668", } ```
{"language": ["zh"], "license": "apache-2.0"}
hfl/chinese-legal-electra-base-discriminator
null
[ "transformers", "pytorch", "tf", "electra", "zh", "arxiv:2004.13922", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2004.13922" ]
[ "zh" ]
TAGS #transformers #pytorch #tf #electra #zh #arxiv-2004.13922 #license-apache-2.0 #endpoints_compatible #region-us
# This model is specifically designed for the legal domain. ## Chinese ELECTRA Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters compared to BERT and its variants. This project is based on the official code of ELECTRA: URL You may also be interested in: - Chinese BERT series: URL - Chinese ELECTRA: URL - Chinese XLNet: URL - Knowledge Distillation Toolkit - TextBrewer: URL More resources by HFL: URL If you find our resource or paper useful, please consider including the following citation in your paper. - URL
[ "# This model is specifically designed for legal domain.", "## Chinese ELECTRA\nGoogle and Stanford University released a new pre-trained model called ELECTRA, which has a much compact model size and relatively competitive performance compared to BERT and its variants.\nFor further accelerating the research of the Chinese pre-trained model, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.\nELECTRA-small could reach similar or even higher scores on several NLP tasks with only 1/10 parameters compared to BERT and its variants.\n\nThis project is based on the official code of ELECTRA: URL\n\nYou may also interested in,\n- Chinese BERT series: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\n\nIf you find our resource or paper is useful, please consider including the following citation in your paper.\n- URL" ]
[ "TAGS\n#transformers #pytorch #tf #electra #zh #arxiv-2004.13922 #license-apache-2.0 #endpoints_compatible #region-us \n", "# This model is specifically designed for legal domain.", "## Chinese ELECTRA\nGoogle and Stanford University released a new pre-trained model called ELECTRA, which has a much compact model size and relatively competitive performance compared to BERT and its variants.\nFor further accelerating the research of the Chinese pre-trained model, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.\nELECTRA-small could reach similar or even higher scores on several NLP tasks with only 1/10 parameters compared to BERT and its variants.\n\nThis project is based on the official code of ELECTRA: URL\n\nYou may also interested in,\n- Chinese BERT series: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\n\nIf you find our resource or paper is useful, please consider including the following citation in your paper.\n- URL" ]
fill-mask
transformers
# This model is specifically designed for the legal domain. ## Chinese ELECTRA Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters compared to BERT and its variants. This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra) You may also be interested in: - Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm - Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA - Chinese XLNet: https://github.com/ymcui/Chinese-XLNet - Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer More resources by HFL: https://github.com/ymcui/HFL-Anthology ## Citation If you find our resource or paper useful, please consider including the following citation in your paper. - https://arxiv.org/abs/2004.13922 ``` @inproceedings{cui-etal-2020-revisiting, title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing", author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58", pages = "657--668", } ```
{"language": ["zh"], "license": "apache-2.0"}
hfl/chinese-legal-electra-base-generator
null
[ "transformers", "pytorch", "tf", "electra", "fill-mask", "zh", "arxiv:2004.13922", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2004.13922" ]
[ "zh" ]
TAGS #transformers #pytorch #tf #electra #fill-mask #zh #arxiv-2004.13922 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# This model is specifically designed for the legal domain. ## Chinese ELECTRA Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters compared to BERT and its variants. This project is based on the official code of ELECTRA: URL You may also be interested in: - Chinese BERT series: URL - Chinese ELECTRA: URL - Chinese XLNet: URL - Knowledge Distillation Toolkit - TextBrewer: URL More resources by HFL: URL If you find our resource or paper useful, please consider including the following citation in your paper. - URL
[ "# This model is specifically designed for legal domain.", "## Chinese ELECTRA\nGoogle and Stanford University released a new pre-trained model called ELECTRA, which has a much compact model size and relatively competitive performance compared to BERT and its variants.\nFor further accelerating the research of the Chinese pre-trained model, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.\nELECTRA-small could reach similar or even higher scores on several NLP tasks with only 1/10 parameters compared to BERT and its variants.\n\nThis project is based on the official code of ELECTRA: URL\n\nYou may also interested in,\n- Chinese BERT series: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\n\nIf you find our resource or paper is useful, please consider including the following citation in your paper.\n- URL" ]
[ "TAGS\n#transformers #pytorch #tf #electra #fill-mask #zh #arxiv-2004.13922 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# This model is specifically designed for legal domain.", "## Chinese ELECTRA\nGoogle and Stanford University released a new pre-trained model called ELECTRA, which has a much compact model size and relatively competitive performance compared to BERT and its variants.\nFor further accelerating the research of the Chinese pre-trained model, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.\nELECTRA-small could reach similar or even higher scores on several NLP tasks with only 1/10 parameters compared to BERT and its variants.\n\nThis project is based on the official code of ELECTRA: URL\n\nYou may also interested in,\n- Chinese BERT series: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\n\nIf you find our resource or paper is useful, please consider including the following citation in your paper.\n- URL" ]
null
transformers
# This model is specifically designed for the legal domain. ## Chinese ELECTRA Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters compared to BERT and its variants. This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra) You may also be interested in: - Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm - Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA - Chinese XLNet: https://github.com/ymcui/Chinese-XLNet - Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer More resources by HFL: https://github.com/ymcui/HFL-Anthology ## Citation If you find our resource or paper useful, please consider including the following citation in your paper. - https://arxiv.org/abs/2004.13922 ``` @inproceedings{cui-etal-2020-revisiting, title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing", author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58", pages = "657--668", } ```
{"language": ["zh"], "license": "apache-2.0"}
hfl/chinese-legal-electra-large-discriminator
null
[ "transformers", "pytorch", "tf", "electra", "zh", "arxiv:2004.13922", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2004.13922" ]
[ "zh" ]
TAGS #transformers #pytorch #tf #electra #zh #arxiv-2004.13922 #license-apache-2.0 #endpoints_compatible #region-us
# This model is specifically designed for the legal domain. ## Chinese ELECTRA Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters compared to BERT and its variants. This project is based on the official code of ELECTRA: URL You may also be interested in: - Chinese BERT series: URL - Chinese ELECTRA: URL - Chinese XLNet: URL - Knowledge Distillation Toolkit - TextBrewer: URL More resources by HFL: URL If you find our resource or paper useful, please consider including the following citation in your paper. - URL
[ "# This model is specifically designed for legal domain.", "## Chinese ELECTRA\nGoogle and Stanford University released a new pre-trained model called ELECTRA, which has a much compact model size and relatively competitive performance compared to BERT and its variants.\nFor further accelerating the research of the Chinese pre-trained model, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.\nELECTRA-small could reach similar or even higher scores on several NLP tasks with only 1/10 parameters compared to BERT and its variants.\n\nThis project is based on the official code of ELECTRA: URL\n\nYou may also interested in,\n- Chinese BERT series: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\n\nIf you find our resource or paper is useful, please consider including the following citation in your paper.\n- URL" ]
[ "TAGS\n#transformers #pytorch #tf #electra #zh #arxiv-2004.13922 #license-apache-2.0 #endpoints_compatible #region-us \n", "# This model is specifically designed for legal domain.", "## Chinese ELECTRA\nGoogle and Stanford University released a new pre-trained model called ELECTRA, which has a much compact model size and relatively competitive performance compared to BERT and its variants.\nFor further accelerating the research of the Chinese pre-trained model, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.\nELECTRA-small could reach similar or even higher scores on several NLP tasks with only 1/10 parameters compared to BERT and its variants.\n\nThis project is based on the official code of ELECTRA: URL\n\nYou may also interested in,\n- Chinese BERT series: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\n\nIf you find our resource or paper is useful, please consider including the following citation in your paper.\n- URL" ]
fill-mask
transformers
# This model is specifically designed for the legal domain. ## Chinese ELECTRA Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters of BERT and its variants. This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra) You may also be interested in: - Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm - Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA - Chinese XLNet: https://github.com/ymcui/Chinese-XLNet - Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer More resources by HFL: https://github.com/ymcui/HFL-Anthology ## Citation If you find our resource or paper useful, please consider including the following citation in your paper. - https://arxiv.org/abs/2004.13922 ``` @inproceedings{cui-etal-2020-revisiting, title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing", author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58", pages = "657--668", } ```
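As a hedged illustration of using this generator checkpoint for fill-mask (the model id is taken from this card; the helper name `predict_masked` and the sample sentence are illustrative assumptions, not from the original), a minimal sketch with the `transformers` library might look like this. The import is deferred into the function so the sketch can be defined even where the dependency is not installed:

```python
def predict_masked(text, model_id="hfl/chinese-legal-electra-large-generator"):
    """Return (token, score) candidates for a [MASK] slot.

    Requires `pip install transformers` and network access to download
    the checkpoint on first use.
    """
    # Deferred import: the heavy dependency is only needed at call time.
    from transformers import pipeline
    fill_mask = pipeline("fill-mask", model=model_id)
    return [(p["token_str"], round(p["score"], 4)) for p in fill_mask(text)]
```

Calling `predict_masked("被告人的行为已构成[MASK]罪。")` would download the checkpoint and return ranked candidate tokens with scores for the masked position.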
{"language": ["zh"], "license": "apache-2.0"}
hfl/chinese-legal-electra-large-generator
null
[ "transformers", "pytorch", "tf", "electra", "fill-mask", "zh", "arxiv:2004.13922", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2004.13922" ]
[ "zh" ]
TAGS #transformers #pytorch #tf #electra #fill-mask #zh #arxiv-2004.13922 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# This model is specifically designed for legal domain. ## Chinese ELECTRA Google and Stanford University released a new pre-trained model called ELECTRA, which has a much compact model size and relatively competitive performance compared to BERT and its variants. For further accelerating the research of the Chinese pre-trained model, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small could reach similar or even higher scores on several NLP tasks with only 1/10 parameters compared to BERT and its variants. This project is based on the official code of ELECTRA: URL You may also interested in, - Chinese BERT series: URL - Chinese ELECTRA: URL - Chinese XLNet: URL - Knowledge Distillation Toolkit - TextBrewer: URL More resources by HFL: URL If you find our resource or paper is useful, please consider including the following citation in your paper. - URL
[ "# This model is specifically designed for legal domain.", "## Chinese ELECTRA\nGoogle and Stanford University released a new pre-trained model called ELECTRA, which has a much compact model size and relatively competitive performance compared to BERT and its variants.\nFor further accelerating the research of the Chinese pre-trained model, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.\nELECTRA-small could reach similar or even higher scores on several NLP tasks with only 1/10 parameters compared to BERT and its variants.\n\nThis project is based on the official code of ELECTRA: URL\n\nYou may also interested in,\n- Chinese BERT series: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\n\nIf you find our resource or paper is useful, please consider including the following citation in your paper.\n- URL" ]
[ "TAGS\n#transformers #pytorch #tf #electra #fill-mask #zh #arxiv-2004.13922 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# This model is specifically designed for legal domain.", "## Chinese ELECTRA\nGoogle and Stanford University released a new pre-trained model called ELECTRA, which has a much compact model size and relatively competitive performance compared to BERT and its variants.\nFor further accelerating the research of the Chinese pre-trained model, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.\nELECTRA-small could reach similar or even higher scores on several NLP tasks with only 1/10 parameters compared to BERT and its variants.\n\nThis project is based on the official code of ELECTRA: URL\n\nYou may also interested in,\n- Chinese BERT series: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\n\nIf you find our resource or paper is useful, please consider including the following citation in your paper.\n- URL" ]
null
transformers
# This model is specifically designed for the legal domain. ## Chinese ELECTRA Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters of BERT and its variants. This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra) You may also be interested in: - Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm - Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA - Chinese XLNet: https://github.com/ymcui/Chinese-XLNet - Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer More resources by HFL: https://github.com/ymcui/HFL-Anthology ## Citation If you find our resource or paper useful, please consider including the following citation in your paper. - https://arxiv.org/abs/2004.13922 ``` @inproceedings{cui-etal-2020-revisiting, title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing", author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58", pages = "657--668", } ```
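An ELECTRA discriminator scores each input token as original vs. replaced rather than filling masks. A sketch (not from the original card) of querying this discriminator checkpoint is below; `ElectraForPreTraining` and `ElectraTokenizerFast` are the standard `transformers` classes for this head, and the zero-logit decision threshold is a common convention, not something this card specifies:

```python
def flag_replaced_tokens(text, model_id="hfl/chinese-legal-electra-small-discriminator"):
    """Return (token, looks_replaced) pairs from the discriminator head.

    Requires `transformers`, `torch`, and network access for the first download.
    """
    import torch
    from transformers import ElectraForPreTraining, ElectraTokenizerFast

    tokenizer = ElectraTokenizerFast.from_pretrained(model_id)
    model = ElectraForPreTraining.from_pretrained(model_id)
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        per_token_logits = model(**inputs).logits[0]
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    # A positive logit means the discriminator suspects the token was replaced.
    return [(tok, bool(logit > 0)) for tok, logit in zip(tokens, per_token_logits)]
```

This mirrors ELECTRA's replaced-token-detection pre-training objective: the discriminator, not the generator, is the encoder normally fine-tuned on downstream tasks.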
{"language": ["zh"], "license": "apache-2.0"}
hfl/chinese-legal-electra-small-discriminator
null
[ "transformers", "pytorch", "tf", "electra", "pretraining", "zh", "arxiv:2004.13922", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2004.13922" ]
[ "zh" ]
TAGS #transformers #pytorch #tf #electra #pretraining #zh #arxiv-2004.13922 #license-apache-2.0 #endpoints_compatible #region-us
# This model is specifically designed for legal domain. ## Chinese ELECTRA Google and Stanford University released a new pre-trained model called ELECTRA, which has a much compact model size and relatively competitive performance compared to BERT and its variants. For further accelerating the research of the Chinese pre-trained model, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small could reach similar or even higher scores on several NLP tasks with only 1/10 parameters compared to BERT and its variants. This project is based on the official code of ELECTRA: URL You may also interested in, - Chinese BERT series: URL - Chinese ELECTRA: URL - Chinese XLNet: URL - Knowledge Distillation Toolkit - TextBrewer: URL More resources by HFL: URL If you find our resource or paper is useful, please consider including the following citation in your paper. - URL
[ "# This model is specifically designed for legal domain.", "## Chinese ELECTRA\nGoogle and Stanford University released a new pre-trained model called ELECTRA, which has a much compact model size and relatively competitive performance compared to BERT and its variants.\nFor further accelerating the research of the Chinese pre-trained model, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.\nELECTRA-small could reach similar or even higher scores on several NLP tasks with only 1/10 parameters compared to BERT and its variants.\n\nThis project is based on the official code of ELECTRA: URL\n\nYou may also interested in,\n- Chinese BERT series: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\n\nIf you find our resource or paper is useful, please consider including the following citation in your paper.\n- URL" ]
[ "TAGS\n#transformers #pytorch #tf #electra #pretraining #zh #arxiv-2004.13922 #license-apache-2.0 #endpoints_compatible #region-us \n", "# This model is specifically designed for legal domain.", "## Chinese ELECTRA\nGoogle and Stanford University released a new pre-trained model called ELECTRA, which has a much compact model size and relatively competitive performance compared to BERT and its variants.\nFor further accelerating the research of the Chinese pre-trained model, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.\nELECTRA-small could reach similar or even higher scores on several NLP tasks with only 1/10 parameters compared to BERT and its variants.\n\nThis project is based on the official code of ELECTRA: URL\n\nYou may also interested in,\n- Chinese BERT series: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\n\nIf you find our resource or paper is useful, please consider including the following citation in your paper.\n- URL" ]
fill-mask
transformers
# This model is specifically designed for the legal domain. ## Chinese ELECTRA Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters of BERT and its variants. This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra) You may also be interested in: - Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm - Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA - Chinese XLNet: https://github.com/ymcui/Chinese-XLNet - Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer More resources by HFL: https://github.com/ymcui/HFL-Anthology ## Citation If you find our resource or paper useful, please consider including the following citation in your paper. - https://arxiv.org/abs/2004.13922 ``` @inproceedings{cui-etal-2020-revisiting, title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing", author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58", pages = "657--668", } ```
{"language": ["zh"], "license": "apache-2.0"}
hfl/chinese-legal-electra-small-generator
null
[ "transformers", "pytorch", "tf", "electra", "fill-mask", "zh", "arxiv:2004.13922", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2004.13922" ]
[ "zh" ]
TAGS #transformers #pytorch #tf #electra #fill-mask #zh #arxiv-2004.13922 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# This model is specifically designed for legal domain. ## Chinese ELECTRA Google and Stanford University released a new pre-trained model called ELECTRA, which has a much compact model size and relatively competitive performance compared to BERT and its variants. For further accelerating the research of the Chinese pre-trained model, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small could reach similar or even higher scores on several NLP tasks with only 1/10 parameters compared to BERT and its variants. This project is based on the official code of ELECTRA: URL You may also interested in, - Chinese BERT series: URL - Chinese ELECTRA: URL - Chinese XLNet: URL - Knowledge Distillation Toolkit - TextBrewer: URL More resources by HFL: URL If you find our resource or paper is useful, please consider including the following citation in your paper. - URL
[ "# This model is specifically designed for legal domain.", "## Chinese ELECTRA\nGoogle and Stanford University released a new pre-trained model called ELECTRA, which has a much compact model size and relatively competitive performance compared to BERT and its variants.\nFor further accelerating the research of the Chinese pre-trained model, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.\nELECTRA-small could reach similar or even higher scores on several NLP tasks with only 1/10 parameters compared to BERT and its variants.\n\nThis project is based on the official code of ELECTRA: URL\n\nYou may also interested in,\n- Chinese BERT series: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\n\nIf you find our resource or paper is useful, please consider including the following citation in your paper.\n- URL" ]
[ "TAGS\n#transformers #pytorch #tf #electra #fill-mask #zh #arxiv-2004.13922 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# This model is specifically designed for legal domain.", "## Chinese ELECTRA\nGoogle and Stanford University released a new pre-trained model called ELECTRA, which has a much compact model size and relatively competitive performance compared to BERT and its variants.\nFor further accelerating the research of the Chinese pre-trained model, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.\nELECTRA-small could reach similar or even higher scores on several NLP tasks with only 1/10 parameters compared to BERT and its variants.\n\nThis project is based on the official code of ELECTRA: URL\n\nYou may also interested in,\n- Chinese BERT series: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\n\nIf you find our resource or paper is useful, please consider including the following citation in your paper.\n- URL" ]
fill-mask
transformers
<p align="center"> <br> <img src="https://github.com/ymcui/MacBERT/raw/master/pics/banner.png" width="500"/> <br> </p> <p align="center"> <a href="https://github.com/ymcui/MacBERT/blob/master/LICENSE"> <img alt="GitHub" src="https://img.shields.io/github/license/ymcui/MacBERT.svg?color=blue&style=flat-square"> </a> </p> # Please use 'Bert' related functions to load this model! This repository contains the resources in our paper **"Revisiting Pre-trained Models for Chinese Natural Language Processing"**, which will be published in "[Findings of EMNLP](https://2020.emnlp.org)". You can read our camera-ready paper through [ACL Anthology](#) or [arXiv pre-print](https://arxiv.org/abs/2004.13922). **[Revisiting Pre-trained Models for Chinese Natural Language Processing](https://arxiv.org/abs/2004.13922)** *Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shijin Wang, Guoping Hu* You may also be interested in: - Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm - Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA - Chinese XLNet: https://github.com/ymcui/Chinese-XLNet - Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer More resources by HFL: https://github.com/ymcui/HFL-Anthology ## Introduction **MacBERT** is an improved BERT with a novel **M**LM **a**s **c**orrection pre-training task, which mitigates the discrepancy between pre-training and fine-tuning. Instead of masking with the [MASK] token, which never appears in the fine-tuning stage, **we propose to use similar words for the masking purpose**. A similar word is obtained by using the [Synonyms toolkit (Wang and Hu, 2017)](https://github.com/chatopera/Synonyms), which is based on word2vec (Mikolov et al., 2013) similarity calculations. If an N-gram is selected to mask, we find similar words individually. In rare cases, when there is no similar word, we fall back to random word replacement. Here is an example of our pre-training task.
| | Example | | -------------- | ----------------- | | **Original Sentence** | we use a language model to predict the probability of the next word. | | **MLM** | we use a language [M] to [M] ##di ##ct the pro [M] ##bility of the next word . | | **Whole word masking** | we use a language [M] to [M] [M] [M] the [M] [M] [M] of the next word . | | **N-gram masking** | we use a [M] [M] to [M] [M] [M] the [M] [M] [M] [M] [M] next word . | | **MLM as correction** | we use a text system to ca ##lc ##ulate the po ##si ##bility of the next word . | In addition to the new pre-training task, we also incorporate the following techniques. - Whole Word Masking (WWM) - N-gram masking - Sentence-Order Prediction (SOP) **Note that our MacBERT can be used as a drop-in replacement for the original BERT, as there are no differences in the main neural architecture.** For more technical details, please check our paper: [Revisiting Pre-trained Models for Chinese Natural Language Processing](https://arxiv.org/abs/2004.13922) ## Citation If you find our resource or paper useful, please consider including the following citation in your paper. - https://arxiv.org/abs/2004.13922 ``` @inproceedings{cui-etal-2020-revisiting, title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing", author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58", pages = "657--668", } ```
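The card's instruction to use 'Bert' related loading functions can be sketched as follows (the helper name is illustrative; the class names are the standard `transformers` BERT classes, and the import is deferred so the sketch can be defined without the dependency installed):

```python
def load_macbert(model_id="hfl/chinese-macbert-base"):
    """Load MacBERT with stock BERT classes.

    MacBERT keeps BERT's architecture, so no MacBERT-specific class exists;
    requires `transformers` and network access for the first download.
    """
    from transformers import BertForMaskedLM, BertTokenizerFast
    tokenizer = BertTokenizerFast.from_pretrained(model_id)
    model = BertForMaskedLM.from_pretrained(model_id)
    return tokenizer, model
```

Because the architecture is unchanged, the returned `(tokenizer, model)` pair can be swapped into any existing BERT fine-tuning or inference code without modification.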
{"language": ["zh"], "license": "apache-2.0", "tags": ["bert"]}
hfl/chinese-macbert-base
null
[ "transformers", "pytorch", "tf", "jax", "bert", "fill-mask", "zh", "arxiv:2004.13922", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2004.13922" ]
[ "zh" ]
TAGS #transformers #pytorch #tf #jax #bert #fill-mask #zh #arxiv-2004.13922 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
![](URL width=) <a href="URL <img alt="GitHub" src="URL </a> Please use 'Bert' related functions to load this model! ======================================================= This repository contains the resources in our paper "Revisiting Pre-trained Models for Chinese Natural Language Processing", which will be published in "Findings of EMNLP". You can read our camera-ready paper through ACL Anthology or arXiv pre-print. Revisiting Pre-trained Models for Chinese Natural Language Processing You may also interested in, * Chinese BERT series: URL * Chinese ELECTRA: URL * Chinese XLNet: URL * Knowledge Distillation Toolkit - TextBrewer: URL More resources by HFL: URL Introduction ------------ MacBERT is an improved BERT with novel MLM as correction pre-training task, which mitigates the discrepancy of pre-training and fine-tuning. Instead of masking with [MASK] token, which never appears in the fine-tuning stage, we propose to use similar words for the masking purpose. A similar word is obtained by using Synonyms toolkit (Wang and Hu, 2017), which is based on word2vec (Mikolov et al., 2013) similarity calculations. If an N-gram is selected to mask, we will find similar words individually. In rare cases, when there is no similar word, we will degrade to use random word replacement. Here is an example of our pre-training task. Except for the new pre-training task, we also incorporate the following techniques. * Whole Word Masking (WWM) * N-gram masking * Sentence-Order Prediction (SOP) Note that our MacBERT can be directly replaced with the original BERT as there is no differences in the main neural architecture. For more technical details, please check our paper: Revisiting Pre-trained Models for Chinese Natural Language Processing If you find our resource or paper is useful, please consider including the following citation in your paper. * URL
[]
[ "TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #zh #arxiv-2004.13922 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n" ]
fill-mask
transformers
<p align="center"> <br> <img src="https://github.com/ymcui/MacBERT/raw/master/pics/banner.png" width="500"/> <br> </p> <p align="center"> <a href="https://github.com/ymcui/MacBERT/blob/master/LICENSE"> <img alt="GitHub" src="https://img.shields.io/github/license/ymcui/MacBERT.svg?color=blue&style=flat-square"> </a> </p> # Please use 'Bert' related functions to load this model! This repository contains the resources in our paper **"Revisiting Pre-trained Models for Chinese Natural Language Processing"**, which will be published in "[Findings of EMNLP](https://2020.emnlp.org)". You can read our camera-ready paper through [ACL Anthology](#) or [arXiv pre-print](https://arxiv.org/abs/2004.13922). **[Revisiting Pre-trained Models for Chinese Natural Language Processing](https://arxiv.org/abs/2004.13922)** *Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shijin Wang, Guoping Hu* You may also be interested in: - Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm - Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA - Chinese XLNet: https://github.com/ymcui/Chinese-XLNet - Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer More resources by HFL: https://github.com/ymcui/HFL-Anthology ## Introduction **MacBERT** is an improved BERT with a novel **M**LM **a**s **c**orrection pre-training task, which mitigates the discrepancy between pre-training and fine-tuning. Instead of masking with the [MASK] token, which never appears in the fine-tuning stage, **we propose to use similar words for the masking purpose**. A similar word is obtained by using the [Synonyms toolkit (Wang and Hu, 2017)](https://github.com/chatopera/Synonyms), which is based on word2vec (Mikolov et al., 2013) similarity calculations. If an N-gram is selected to mask, we find similar words individually. In rare cases, when there is no similar word, we fall back to random word replacement. Here is an example of our pre-training task.
| | Example | | -------------- | ----------------- | | **Original Sentence** | we use a language model to predict the probability of the next word. | | **MLM** | we use a language [M] to [M] ##di ##ct the pro [M] ##bility of the next word . | | **Whole word masking** | we use a language [M] to [M] [M] [M] the [M] [M] [M] of the next word . | | **N-gram masking** | we use a [M] [M] to [M] [M] [M] the [M] [M] [M] [M] [M] next word . | | **MLM as correction** | we use a text system to ca ##lc ##ulate the po ##si ##bility of the next word . | In addition to the new pre-training task, we also incorporate the following techniques. - Whole Word Masking (WWM) - N-gram masking - Sentence-Order Prediction (SOP) **Note that our MacBERT can be used as a drop-in replacement for the original BERT, as there are no differences in the main neural architecture.** For more technical details, please check our paper: [Revisiting Pre-trained Models for Chinese Natural Language Processing](https://arxiv.org/abs/2004.13922) ## Citation If you find our resource or paper useful, please consider including the following citation in your paper. - https://arxiv.org/abs/2004.13922 ``` @inproceedings{cui-etal-2020-revisiting, title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing", author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58", pages = "657--668", } ```
{"language": ["zh"], "license": "apache-2.0", "tags": ["bert"]}
hfl/chinese-macbert-large
null
[ "transformers", "pytorch", "tf", "jax", "bert", "fill-mask", "zh", "arxiv:2004.13922", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2004.13922" ]
[ "zh" ]
TAGS #transformers #pytorch #tf #jax #bert #fill-mask #zh #arxiv-2004.13922 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
![](URL width=) <a href="URL <img alt="GitHub" src="URL </a> Please use 'Bert' related functions to load this model! ======================================================= This repository contains the resources in our paper "Revisiting Pre-trained Models for Chinese Natural Language Processing", which will be published in "Findings of EMNLP". You can read our camera-ready paper through ACL Anthology or arXiv pre-print. Revisiting Pre-trained Models for Chinese Natural Language Processing You may also interested in, * Chinese BERT series: URL * Chinese ELECTRA: URL * Chinese XLNet: URL * Knowledge Distillation Toolkit - TextBrewer: URL More resources by HFL: URL Introduction ------------ MacBERT is an improved BERT with novel MLM as correction pre-training task, which mitigates the discrepancy of pre-training and fine-tuning. Instead of masking with [MASK] token, which never appears in the fine-tuning stage, we propose to use similar words for the masking purpose. A similar word is obtained by using Synonyms toolkit (Wang and Hu, 2017), which is based on word2vec (Mikolov et al., 2013) similarity calculations. If an N-gram is selected to mask, we will find similar words individually. In rare cases, when there is no similar word, we will degrade to use random word replacement. Here is an example of our pre-training task. Except for the new pre-training task, we also incorporate the following techniques. * Whole Word Masking (WWM) * N-gram masking * Sentence-Order Prediction (SOP) Note that our MacBERT can be directly replaced with the original BERT as there is no differences in the main neural architecture. For more technical details, please check our paper: Revisiting Pre-trained Models for Chinese Natural Language Processing If you find our resource or paper is useful, please consider including the following citation in your paper. * URL
[]
[ "TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #zh #arxiv-2004.13922 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n" ]
feature-extraction
transformers
# Please use 'Bert' related functions to load this model! Under construction... Please visit our GitHub repo for more information: https://github.com/ymcui/PERT
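Since this card lists the model under feature extraction and asks for 'Bert' related loading functions, a minimal sketch might look like the following (the helper name, the mean-pooling choice, and the sample usage are assumptions of this sketch, not from the card):

```python
def sentence_features(text, model_id="hfl/chinese-pert-base"):
    """Return a single mean-pooled feature vector for `text`.

    PERT ships with a BERT architecture, so the stock BERT classes load it;
    requires `transformers`, `torch`, and network access for the first download.
    """
    import torch
    from transformers import BertModel, BertTokenizerFast

    tokenizer = BertTokenizerFast.from_pretrained(model_id)
    model = BertModel.from_pretrained(model_id)
    with torch.no_grad():
        hidden = model(**tokenizer(text, return_tensors="pt")).last_hidden_state
    # Mean-pool token features into one sentence vector (one common convention).
    return hidden[0].mean(dim=0)
```

Other pooling strategies (e.g. taking the `[CLS]` vector) are equally valid; mean pooling is used here only to keep the sketch concrete.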
{"language": ["zh"], "license": "cc-by-nc-sa-4.0"}
hfl/chinese-pert-base
null
[ "transformers", "pytorch", "tf", "bert", "feature-extraction", "zh", "license:cc-by-nc-sa-4.0", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "zh" ]
TAGS #transformers #pytorch #tf #bert #feature-extraction #zh #license-cc-by-nc-sa-4.0 #endpoints_compatible #has_space #region-us
# Please use 'Bert' related functions to load this model! Under construction... Please visit our GitHub repo for more information: URL
[ "# Please use 'Bert' related functions to load this model!\r\n\r\nUnder construction...\r\n\r\nPlease visit our GitHub repo for more information: URL" ]
[ "TAGS\n#transformers #pytorch #tf #bert #feature-extraction #zh #license-cc-by-nc-sa-4.0 #endpoints_compatible #has_space #region-us \n", "# Please use 'Bert' related functions to load this model!\r\n\r\nUnder construction...\r\n\r\nPlease visit our GitHub repo for more information: URL" ]
feature-extraction
transformers
# Please use 'Bert' related functions to load this model! Under construction... Please visit our GitHub repo for more information: https://github.com/ymcui/PERT
{"language": ["zh"], "license": "cc-by-nc-sa-4.0"}
hfl/chinese-pert-large
null
[ "transformers", "pytorch", "tf", "bert", "feature-extraction", "zh", "license:cc-by-nc-sa-4.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "zh" ]
TAGS #transformers #pytorch #tf #bert #feature-extraction #zh #license-cc-by-nc-sa-4.0 #endpoints_compatible #region-us
# Please use 'Bert' related functions to load this model! Under construction... Please visit our GitHub repo for more information: URL
[ "# Please use 'Bert' related functions to load this model!\r\n\r\nUnder construction...\r\n\r\nPlease visit our GitHub repo for more information: URL" ]
[ "TAGS\n#transformers #pytorch #tf #bert #feature-extraction #zh #license-cc-by-nc-sa-4.0 #endpoints_compatible #region-us \n", "# Please use 'Bert' related functions to load this model!\r\n\r\nUnder construction...\r\n\r\nPlease visit our GitHub repo for more information: URL" ]
fill-mask
transformers
# Please use 'Bert' related functions to load this model! ## Chinese BERT with Whole Word Masking To further accelerate Chinese natural language processing, we provide **Chinese pre-trained BERT with Whole Word Masking**. **[Pre-Training with Whole Word Masking for Chinese BERT](https://arxiv.org/abs/1906.08101)** Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, Guoping Hu This repository is developed based on: https://github.com/google-research/bert You may also be interested in: - Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm - Chinese MacBERT: https://github.com/ymcui/MacBERT - Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA - Chinese XLNet: https://github.com/ymcui/Chinese-XLNet - Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer More resources by HFL: https://github.com/ymcui/HFL-Anthology ## Citation If you find the technical report or resource useful, please cite the following technical report in your paper. - Primary: https://arxiv.org/abs/2004.13922 ``` @inproceedings{cui-etal-2020-revisiting, title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing", author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58", pages = "657--668", } ``` - Secondary: https://arxiv.org/abs/1906.08101 ``` @article{chinese-bert-wwm, title={Pre-Training with Whole Word Masking for Chinese BERT}, author={Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Yang, Ziqing and Wang, Shijin and Hu, Guoping}, journal={arXiv preprint arXiv:1906.08101}, year={2019} } ```
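A hedged sketch of the card's loading advice: despite "RoBERTa" in the model name, the checkpoint is loaded through the BERT classes. The helper name and the `k` parameter are illustrative choices of this sketch, not from the card:

```python
def top_fills(text, model_id="hfl/chinese-roberta-wwm-ext-large", k=5):
    """Return the top-k fill candidates for a [MASK] slot.

    Loaded explicitly with BERT classes, as the card instructs; requires
    `transformers` and network access for the first download.
    """
    from transformers import BertForMaskedLM, BertTokenizerFast, pipeline
    fill = pipeline(
        "fill-mask",
        model=BertForMaskedLM.from_pretrained(model_id),
        tokenizer=BertTokenizerFast.from_pretrained(model_id),
    )
    return [p["token_str"] for p in fill(text, top_k=k)]
```

Passing the model and tokenizer objects (rather than just the model id string) makes the BERT-class requirement explicit instead of relying on auto-class resolution.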
{"language": ["zh"], "license": "apache-2.0", "tags": ["bert"]}
hfl/chinese-roberta-wwm-ext-large
null
[ "transformers", "pytorch", "tf", "jax", "bert", "fill-mask", "zh", "arxiv:1906.08101", "arxiv:2004.13922", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1906.08101", "2004.13922" ]
[ "zh" ]
TAGS #transformers #pytorch #tf #jax #bert #fill-mask #zh #arxiv-1906.08101 #arxiv-2004.13922 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
# Please use 'Bert' related functions to load this model! ## Chinese BERT with Whole Word Masking For further accelerating Chinese natural language processing, we provide Chinese pre-trained BERT with Whole Word Masking. Pre-Training with Whole Word Masking for Chinese BERT Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, Guoping Hu This repository is developed based on: URL You may also be interested in: - Chinese BERT series: URL - Chinese MacBERT: URL - Chinese ELECTRA: URL - Chinese XLNet: URL - Knowledge Distillation Toolkit - TextBrewer: URL More resources by HFL: URL If you find the technical report or resource useful, please cite the following technical report in your paper. - Primary: URL - Secondary: URL
[ "# Please use 'Bert' related functions to load this model!", "## Chinese BERT with Whole Word Masking\nFor further accelerating Chinese natural language processing, we provide Chinese pre-trained BERT with Whole Word Masking. \n\nPre-Training with Whole Word Masking for Chinese BERT \nYiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, Guoping Hu\n\nThis repository is developed based on:URL\n\nYou may also interested in,\n- Chinese BERT series: URL\n- Chinese MacBERT: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\nIf you find the technical report or resource is useful, please cite the following technical report in your paper.\n- Primary: URL\n\n- Secondary: URL" ]
[ "TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #zh #arxiv-1906.08101 #arxiv-2004.13922 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# Please use 'Bert' related functions to load this model!", "## Chinese BERT with Whole Word Masking\nFor further accelerating Chinese natural language processing, we provide Chinese pre-trained BERT with Whole Word Masking. \n\nPre-Training with Whole Word Masking for Chinese BERT \nYiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, Guoping Hu\n\nThis repository is developed based on:URL\n\nYou may also interested in,\n- Chinese BERT series: URL\n- Chinese MacBERT: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\nIf you find the technical report or resource is useful, please cite the following technical report in your paper.\n- Primary: URL\n\n- Secondary: URL" ]
fill-mask
transformers
# Please use 'Bert' related functions to load this model! ## Chinese BERT with Whole Word Masking For further accelerating Chinese natural language processing, we provide **Chinese pre-trained BERT with Whole Word Masking**. **[Pre-Training with Whole Word Masking for Chinese BERT](https://arxiv.org/abs/1906.08101)** Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, Guoping Hu This repository is developed based on: https://github.com/google-research/bert You may also be interested in: - Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm - Chinese MacBERT: https://github.com/ymcui/MacBERT - Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA - Chinese XLNet: https://github.com/ymcui/Chinese-XLNet - Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer More resources by HFL: https://github.com/ymcui/HFL-Anthology ## Citation If you find the technical report or resource useful, please cite the following technical report in your paper. - Primary: https://arxiv.org/abs/2004.13922 ``` @inproceedings{cui-etal-2020-revisiting, title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing", author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58", pages = "657--668", } ``` - Secondary: https://arxiv.org/abs/1906.08101 ``` @article{chinese-bert-wwm, title={Pre-Training with Whole Word Masking for Chinese BERT}, author={Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Yang, Ziqing and Wang, Shijin and Hu, Guoping}, journal={arXiv preprint arXiv:1906.08101}, year={2019} } ```
{"language": ["zh"], "license": "apache-2.0", "tags": ["bert"]}
hfl/chinese-roberta-wwm-ext
null
[ "transformers", "pytorch", "tf", "jax", "bert", "fill-mask", "zh", "arxiv:1906.08101", "arxiv:2004.13922", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1906.08101", "2004.13922" ]
[ "zh" ]
TAGS #transformers #pytorch #tf #jax #bert #fill-mask #zh #arxiv-1906.08101 #arxiv-2004.13922 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
# Please use 'Bert' related functions to load this model! ## Chinese BERT with Whole Word Masking For further accelerating Chinese natural language processing, we provide Chinese pre-trained BERT with Whole Word Masking. Pre-Training with Whole Word Masking for Chinese BERT Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, Guoping Hu This repository is developed based on: URL You may also be interested in: - Chinese BERT series: URL - Chinese MacBERT: URL - Chinese ELECTRA: URL - Chinese XLNet: URL - Knowledge Distillation Toolkit - TextBrewer: URL More resources by HFL: URL If you find the technical report or resource useful, please cite the following technical report in your paper. - Primary: URL - Secondary: URL
[ "# Please use 'Bert' related functions to load this model!", "## Chinese BERT with Whole Word Masking\nFor further accelerating Chinese natural language processing, we provide Chinese pre-trained BERT with Whole Word Masking. \n\nPre-Training with Whole Word Masking for Chinese BERT \nYiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, Guoping Hu\n\nThis repository is developed based on:URL\n\nYou may also interested in,\n- Chinese BERT series: URL\n- Chinese MacBERT: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\nIf you find the technical report or resource is useful, please cite the following technical report in your paper.\n- Primary: URL\n\n- Secondary: URL" ]
[ "TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #zh #arxiv-1906.08101 #arxiv-2004.13922 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# Please use 'Bert' related functions to load this model!", "## Chinese BERT with Whole Word Masking\nFor further accelerating Chinese natural language processing, we provide Chinese pre-trained BERT with Whole Word Masking. \n\nPre-Training with Whole Word Masking for Chinese BERT \nYiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, Guoping Hu\n\nThis repository is developed based on:URL\n\nYou may also interested in,\n- Chinese BERT series: URL\n- Chinese MacBERT: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\nIf you find the technical report or resource is useful, please cite the following technical report in your paper.\n- Primary: URL\n\n- Secondary: URL" ]
text-generation
transformers
## Chinese Pre-Trained XLNet This project provides an XLNet pre-trained model for Chinese, which aims to enrich Chinese natural language processing resources and broaden the selection of Chinese pre-trained models. We welcome all experts and scholars to download and use this model. This project is based on the official CMU/Google XLNet: https://github.com/zihangdai/xlnet You may also be interested in: - Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm - Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA - Chinese XLNet: https://github.com/ymcui/Chinese-XLNet - Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer More resources by HFL: https://github.com/ymcui/HFL-Anthology ## Citation If you find our resource or paper useful, please consider including the following citation in your paper. - https://arxiv.org/abs/2004.13922 ``` @inproceedings{cui-etal-2020-revisiting, title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing", author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58", pages = "657--668", } ```
{"language": ["zh"], "license": "apache-2.0"}
hfl/chinese-xlnet-base
null
[ "transformers", "pytorch", "tf", "xlnet", "text-generation", "zh", "arxiv:2004.13922", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2004.13922" ]
[ "zh" ]
TAGS #transformers #pytorch #tf #xlnet #text-generation #zh #arxiv-2004.13922 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
## Chinese Pre-Trained XLNet This project provides an XLNet pre-trained model for Chinese, which aims to enrich Chinese natural language processing resources and broaden the selection of Chinese pre-trained models. We welcome all experts and scholars to download and use this model. This project is based on the official CMU/Google XLNet: URL You may also be interested in: - Chinese BERT series: URL - Chinese ELECTRA: URL - Chinese XLNet: URL - Knowledge Distillation Toolkit - TextBrewer: URL More resources by HFL: URL If you find our resource or paper useful, please consider including the following citation in your paper. - URL
[ "## Chinese Pre-Trained XLNet\nThis project provides a XLNet pre-training model for Chinese, which aims to enrich Chinese natural language processing resources and provide a variety of Chinese pre-training model selection.\nWe welcome all experts and scholars to download and use this model.\n\nThis project is based on CMU/Google official XLNet: URL\n\nYou may also interested in,\n- Chinese BERT series: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\n\nIf you find our resource or paper is useful, please consider including the following citation in your paper.\n- URL" ]
[ "TAGS\n#transformers #pytorch #tf #xlnet #text-generation #zh #arxiv-2004.13922 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "## Chinese Pre-Trained XLNet\nThis project provides a XLNet pre-training model for Chinese, which aims to enrich Chinese natural language processing resources and provide a variety of Chinese pre-training model selection.\nWe welcome all experts and scholars to download and use this model.\n\nThis project is based on CMU/Google official XLNet: URL\n\nYou may also interested in,\n- Chinese BERT series: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\n\nIf you find our resource or paper is useful, please consider including the following citation in your paper.\n- URL" ]
text-generation
transformers
## Chinese Pre-Trained XLNet This project provides an XLNet pre-trained model for Chinese, which aims to enrich Chinese natural language processing resources and broaden the selection of Chinese pre-trained models. We welcome all experts and scholars to download and use this model. This project is based on the official CMU/Google XLNet: https://github.com/zihangdai/xlnet You may also be interested in: - Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm - Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA - Chinese XLNet: https://github.com/ymcui/Chinese-XLNet - Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer More resources by HFL: https://github.com/ymcui/HFL-Anthology ## Citation If you find our resource or paper useful, please consider including the following citation in your paper. - https://arxiv.org/abs/2004.13922 ``` @inproceedings{cui-etal-2020-revisiting, title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing", author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58", pages = "657--668", } ```
{"language": ["zh"], "license": "apache-2.0"}
hfl/chinese-xlnet-mid
null
[ "transformers", "pytorch", "tf", "xlnet", "text-generation", "zh", "arxiv:2004.13922", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2004.13922" ]
[ "zh" ]
TAGS #transformers #pytorch #tf #xlnet #text-generation #zh #arxiv-2004.13922 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
## Chinese Pre-Trained XLNet This project provides an XLNet pre-trained model for Chinese, which aims to enrich Chinese natural language processing resources and broaden the selection of Chinese pre-trained models. We welcome all experts and scholars to download and use this model. This project is based on the official CMU/Google XLNet: URL You may also be interested in: - Chinese BERT series: URL - Chinese ELECTRA: URL - Chinese XLNet: URL - Knowledge Distillation Toolkit - TextBrewer: URL More resources by HFL: URL If you find our resource or paper useful, please consider including the following citation in your paper. - URL
[ "## Chinese Pre-Trained XLNet\nThis project provides a XLNet pre-training model for Chinese, which aims to enrich Chinese natural language processing resources and provide a variety of Chinese pre-training model selection.\nWe welcome all experts and scholars to download and use this model.\n\nThis project is based on CMU/Google official XLNet: URL\n\nYou may also interested in,\n- Chinese BERT series: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\n\nIf you find our resource or paper is useful, please consider including the following citation in your paper.\n- URL" ]
[ "TAGS\n#transformers #pytorch #tf #xlnet #text-generation #zh #arxiv-2004.13922 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "## Chinese Pre-Trained XLNet\nThis project provides a XLNet pre-training model for Chinese, which aims to enrich Chinese natural language processing resources and provide a variety of Chinese pre-training model selection.\nWe welcome all experts and scholars to download and use this model.\n\nThis project is based on CMU/Google official XLNet: URL\n\nYou may also interested in,\n- Chinese BERT series: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\n\nIf you find our resource or paper is useful, please consider including the following citation in your paper.\n- URL" ]
fill-mask
transformers
## CINO: Pre-trained Language Models for Chinese Minority Languages (中国少数民族预训练模型) Multilingual pre-trained language models, such as mBERT and XLM-R, provide multilingual and cross-lingual abilities for language understanding. We have seen rapid progress on building multilingual PLMs in recent years. However, there is a lack of work on building PLMs for Chinese minority languages, which hinders researchers from building powerful NLP systems. To address the absence of Chinese minority PLMs, the Joint Laboratory of HIT and iFLYTEK Research (HFL) proposes CINO (Chinese-miNOrity pre-trained language model), which is built on XLM-R with additional pre-training using Chinese minority corpora, such as - Chinese,中文(zh) - Tibetan,藏语(bo) - Mongolian (Uighur form),蒙语(mn) - Uyghur,维吾尔语(ug) - Kazakh (Arabic form),哈萨克语(kk) - Korean,朝鲜语(ko) - Zhuang,壮语 - Cantonese,粤语(yue) Please read our GitHub repository for more details (Chinese): https://github.com/ymcui/Chinese-Minority-PLM You may also be interested in: Chinese MacBERT: https://github.com/ymcui/MacBERT Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA Chinese XLNet: https://github.com/ymcui/Chinese-XLNet Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer More resources by HFL: https://github.com/ymcui/HFL-Anthology
{"language": ["zh", "bo", "kk", "ko", "mn", "ug", "yue"], "license": "apache-2.0"}
hfl/cino-base-v2
null
[ "transformers", "pytorch", "tf", "xlm-roberta", "fill-mask", "zh", "bo", "kk", "ko", "mn", "ug", "yue", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "zh", "bo", "kk", "ko", "mn", "ug", "yue" ]
TAGS #transformers #pytorch #tf #xlm-roberta #fill-mask #zh #bo #kk #ko #mn #ug #yue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
## CINO: Pre-trained Language Models for Chinese Minority Languages (中国少数民族预训练模型) Multilingual pre-trained language models, such as mBERT and XLM-R, provide multilingual and cross-lingual abilities for language understanding. We have seen rapid progress on building multilingual PLMs in recent years. However, there is a lack of work on building PLMs for Chinese minority languages, which hinders researchers from building powerful NLP systems. To address the absence of Chinese minority PLMs, the Joint Laboratory of HIT and iFLYTEK Research (HFL) proposes CINO (Chinese-miNOrity pre-trained language model), which is built on XLM-R with additional pre-training using Chinese minority corpora, such as - Chinese,中文(zh) - Tibetan,藏语(bo) - Mongolian (Uighur form),蒙语(mn) - Uyghur,维吾尔语(ug) - Kazakh (Arabic form),哈萨克语(kk) - Korean,朝鲜语(ko) - Zhuang,壮语 - Cantonese,粤语(yue) Please read our GitHub repository for more details (Chinese): URL You may also be interested in: Chinese MacBERT: URL Chinese BERT series: URL Chinese ELECTRA: URL Chinese XLNet: URL Knowledge Distillation Toolkit - TextBrewer: URL More resources by HFL: URL
[ "## CINO: Pre-trained Language Models for Chinese Minority Languages(中国少数民族预训练模型)\n\nMultilingual Pre-trained Language Model, such as mBERT, XLM-R, provide multilingual and cross-lingual ability for language understanding.\nWe have seen rapid progress on building multilingual PLMs in recent year.\nHowever, there is a lack of contributions on building PLMs on Chines minority languages, which hinders researchers from building powerful NLP systems.\n\nTo address the absence of Chinese minority PLMs, Joint Laboratory of HIT and iFLYTEK Research (HFL) proposes CINO (Chinese-miNOrity pre-trained language model), which is built on XLM-R with additional pre-training using Chinese minority corpus, such as \n- Chinese,中文(zh)\n- Tibetan,藏语(bo)\n- Mongolian (Uighur form),蒙语(mn)\n- Uyghur,维吾尔语(ug)\n- Kazakh (Arabic form),哈萨克语(kk)\n- Korean,朝鲜语(ko)\n- Zhuang,壮语\n- Cantonese,粤语(yue)\n\nPlease read our GitHub repository for more details (Chinese): URL\n\nYou may also interested in,\n\nChinese MacBERT: URL \nChinese BERT series: URL \nChinese ELECTRA: URL \nChinese XLNet: URL \nKnowledge Distillation Toolkit - TextBrewer: URL \n\nMore resources by HFL: URL" ]
[ "TAGS\n#transformers #pytorch #tf #xlm-roberta #fill-mask #zh #bo #kk #ko #mn #ug #yue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "## CINO: Pre-trained Language Models for Chinese Minority Languages(中国少数民族预训练模型)\n\nMultilingual Pre-trained Language Model, such as mBERT, XLM-R, provide multilingual and cross-lingual ability for language understanding.\nWe have seen rapid progress on building multilingual PLMs in recent year.\nHowever, there is a lack of contributions on building PLMs on Chines minority languages, which hinders researchers from building powerful NLP systems.\n\nTo address the absence of Chinese minority PLMs, Joint Laboratory of HIT and iFLYTEK Research (HFL) proposes CINO (Chinese-miNOrity pre-trained language model), which is built on XLM-R with additional pre-training using Chinese minority corpus, such as \n- Chinese,中文(zh)\n- Tibetan,藏语(bo)\n- Mongolian (Uighur form),蒙语(mn)\n- Uyghur,维吾尔语(ug)\n- Kazakh (Arabic form),哈萨克语(kk)\n- Korean,朝鲜语(ko)\n- Zhuang,壮语\n- Cantonese,粤语(yue)\n\nPlease read our GitHub repository for more details (Chinese): URL\n\nYou may also interested in,\n\nChinese MacBERT: URL \nChinese BERT series: URL \nChinese ELECTRA: URL \nChinese XLNet: URL \nKnowledge Distillation Toolkit - TextBrewer: URL \n\nMore resources by HFL: URL" ]
fill-mask
transformers
## CINO: Pre-trained Language Models for Chinese Minority Languages (中国少数民族预训练模型) Multilingual pre-trained language models, such as mBERT and XLM-R, provide multilingual and cross-lingual abilities for language understanding. We have seen rapid progress on building multilingual PLMs in recent years. However, there is a lack of work on building PLMs for Chinese minority languages, which hinders researchers from building powerful NLP systems. To address the absence of Chinese minority PLMs, the Joint Laboratory of HIT and iFLYTEK Research (HFL) proposes CINO (Chinese-miNOrity pre-trained language model), which is built on XLM-R with additional pre-training using Chinese minority corpora, such as - Chinese,中文(zh) - Tibetan,藏语(bo) - Mongolian (Uighur form),蒙语(mn) - Uyghur,维吾尔语(ug) - Kazakh (Arabic form),哈萨克语(kk) - Korean,朝鲜语(ko) - Zhuang,壮语 - Cantonese,粤语(yue) Please read our GitHub repository for more details (Chinese): https://github.com/ymcui/Chinese-Minority-PLM You may also be interested in: Chinese MacBERT: https://github.com/ymcui/MacBERT Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA Chinese XLNet: https://github.com/ymcui/Chinese-XLNet Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer More resources by HFL: https://github.com/ymcui/HFL-Anthology
{"language": ["zh", "bo", "kk", "ko", "mn", "ug", "yue"], "license": "apache-2.0"}
hfl/cino-large-v2
null
[ "transformers", "pytorch", "tf", "xlm-roberta", "fill-mask", "zh", "bo", "kk", "ko", "mn", "ug", "yue", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "zh", "bo", "kk", "ko", "mn", "ug", "yue" ]
TAGS #transformers #pytorch #tf #xlm-roberta #fill-mask #zh #bo #kk #ko #mn #ug #yue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
## CINO: Pre-trained Language Models for Chinese Minority Languages (中国少数民族预训练模型) Multilingual pre-trained language models, such as mBERT and XLM-R, provide multilingual and cross-lingual abilities for language understanding. We have seen rapid progress on building multilingual PLMs in recent years. However, there is a lack of work on building PLMs for Chinese minority languages, which hinders researchers from building powerful NLP systems. To address the absence of Chinese minority PLMs, the Joint Laboratory of HIT and iFLYTEK Research (HFL) proposes CINO (Chinese-miNOrity pre-trained language model), which is built on XLM-R with additional pre-training using Chinese minority corpora, such as - Chinese,中文(zh) - Tibetan,藏语(bo) - Mongolian (Uighur form),蒙语(mn) - Uyghur,维吾尔语(ug) - Kazakh (Arabic form),哈萨克语(kk) - Korean,朝鲜语(ko) - Zhuang,壮语 - Cantonese,粤语(yue) Please read our GitHub repository for more details (Chinese): URL You may also be interested in: Chinese MacBERT: URL Chinese BERT series: URL Chinese ELECTRA: URL Chinese XLNet: URL Knowledge Distillation Toolkit - TextBrewer: URL More resources by HFL: URL
[ "## CINO: Pre-trained Language Models for Chinese Minority Languages(中国少数民族预训练模型)\n\nMultilingual Pre-trained Language Model, such as mBERT, XLM-R, provide multilingual and cross-lingual ability for language understanding.\nWe have seen rapid progress on building multilingual PLMs in recent year.\nHowever, there is a lack of contributions on building PLMs on Chines minority languages, which hinders researchers from building powerful NLP systems.\n\nTo address the absence of Chinese minority PLMs, Joint Laboratory of HIT and iFLYTEK Research (HFL) proposes CINO (Chinese-miNOrity pre-trained language model), which is built on XLM-R with additional pre-training using Chinese minority corpus, such as \n- Chinese,中文(zh)\n- Tibetan,藏语(bo)\n- Mongolian (Uighur form),蒙语(mn)\n- Uyghur,维吾尔语(ug)\n- Kazakh (Arabic form),哈萨克语(kk)\n- Korean,朝鲜语(ko)\n- Zhuang,壮语\n- Cantonese,粤语(yue)\n\nPlease read our GitHub repository for more details (Chinese): URL\n\nYou may also interested in,\n\nChinese MacBERT: URL \nChinese BERT series: URL \nChinese ELECTRA: URL \nChinese XLNet: URL \nKnowledge Distillation Toolkit - TextBrewer: URL \n\nMore resources by HFL: URL" ]
[ "TAGS\n#transformers #pytorch #tf #xlm-roberta #fill-mask #zh #bo #kk #ko #mn #ug #yue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "## CINO: Pre-trained Language Models for Chinese Minority Languages(中国少数民族预训练模型)\n\nMultilingual Pre-trained Language Model, such as mBERT, XLM-R, provide multilingual and cross-lingual ability for language understanding.\nWe have seen rapid progress on building multilingual PLMs in recent year.\nHowever, there is a lack of contributions on building PLMs on Chines minority languages, which hinders researchers from building powerful NLP systems.\n\nTo address the absence of Chinese minority PLMs, Joint Laboratory of HIT and iFLYTEK Research (HFL) proposes CINO (Chinese-miNOrity pre-trained language model), which is built on XLM-R with additional pre-training using Chinese minority corpus, such as \n- Chinese,中文(zh)\n- Tibetan,藏语(bo)\n- Mongolian (Uighur form),蒙语(mn)\n- Uyghur,维吾尔语(ug)\n- Kazakh (Arabic form),哈萨克语(kk)\n- Korean,朝鲜语(ko)\n- Zhuang,壮语\n- Cantonese,粤语(yue)\n\nPlease read our GitHub repository for more details (Chinese): URL\n\nYou may also interested in,\n\nChinese MacBERT: URL \nChinese BERT series: URL \nChinese ELECTRA: URL \nChinese XLNet: URL \nKnowledge Distillation Toolkit - TextBrewer: URL \n\nMore resources by HFL: URL" ]
fill-mask
transformers
## CINO: Pre-trained Language Models for Chinese Minority Languages (中国少数民族预训练模型) Multilingual pre-trained language models, such as mBERT and XLM-R, provide multilingual and cross-lingual abilities for language understanding. We have seen rapid progress on building multilingual PLMs in recent years. However, there is a lack of work on building PLMs for Chinese minority languages, which hinders researchers from building powerful NLP systems. To address the absence of Chinese minority PLMs, the Joint Laboratory of HIT and iFLYTEK Research (HFL) proposes CINO (Chinese-miNOrity pre-trained language model), which is built on XLM-R with additional pre-training using Chinese minority corpora, such as - Chinese,中文(zh) - Tibetan,藏语(bo) - Mongolian (Uighur form),蒙语(mn) - Uyghur,维吾尔语(ug) - Kazakh (Arabic form),哈萨克语(kk) - Korean,朝鲜语(ko) - Zhuang,壮语 - Cantonese,粤语(yue) Please read our GitHub repository for more details (Chinese): https://github.com/ymcui/Chinese-Minority-PLM You may also be interested in: Chinese MacBERT: https://github.com/ymcui/MacBERT Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA Chinese XLNet: https://github.com/ymcui/Chinese-XLNet Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer More resources by HFL: https://github.com/ymcui/HFL-Anthology
{"language": ["zh", "bo", "kk", "ko", "mn", "ug", "yue"], "license": "apache-2.0"}
hfl/cino-large
null
[ "transformers", "pytorch", "tf", "xlm-roberta", "fill-mask", "zh", "bo", "kk", "ko", "mn", "ug", "yue", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "zh", "bo", "kk", "ko", "mn", "ug", "yue" ]
TAGS #transformers #pytorch #tf #xlm-roberta #fill-mask #zh #bo #kk #ko #mn #ug #yue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
## CINO: Pre-trained Language Models for Chinese Minority Languages (中国少数民族预训练模型) Multilingual pre-trained language models, such as mBERT and XLM-R, provide multilingual and cross-lingual abilities for language understanding. We have seen rapid progress on building multilingual PLMs in recent years. However, there is a lack of work on building PLMs for Chinese minority languages, which hinders researchers from building powerful NLP systems. To address the absence of Chinese minority PLMs, the Joint Laboratory of HIT and iFLYTEK Research (HFL) proposes CINO (Chinese-miNOrity pre-trained language model), which is built on XLM-R with additional pre-training using Chinese minority corpora, such as - Chinese,中文(zh) - Tibetan,藏语(bo) - Mongolian (Uighur form),蒙语(mn) - Uyghur,维吾尔语(ug) - Kazakh (Arabic form),哈萨克语(kk) - Korean,朝鲜语(ko) - Zhuang,壮语 - Cantonese,粤语(yue) Please read our GitHub repository for more details (Chinese): URL You may also be interested in: Chinese MacBERT: URL Chinese BERT series: URL Chinese ELECTRA: URL Chinese XLNet: URL Knowledge Distillation Toolkit - TextBrewer: URL More resources by HFL: URL
[ "## CINO: Pre-trained Language Models for Chinese Minority Languages(中国少数民族预训练模型)\n\nMultilingual Pre-trained Language Model, such as mBERT, XLM-R, provide multilingual and cross-lingual ability for language understanding.\nWe have seen rapid progress on building multilingual PLMs in recent year.\nHowever, there is a lack of contributions on building PLMs on Chines minority languages, which hinders researchers from building powerful NLP systems.\n\nTo address the absence of Chinese minority PLMs, Joint Laboratory of HIT and iFLYTEK Research (HFL) proposes CINO (Chinese-miNOrity pre-trained language model), which is built on XLM-R with additional pre-training using Chinese minority corpus, such as \n- Chinese,中文(zh)\n- Tibetan,藏语(bo)\n- Mongolian (Uighur form),蒙语(mn)\n- Uyghur,维吾尔语(ug)\n- Kazakh (Arabic form),哈萨克语(kk)\n- Korean,朝鲜语(ko)\n- Zhuang,壮语\n- Cantonese,粤语(yue)\n\nPlease read our GitHub repository for more details (Chinese): URL\n\nYou may also interested in,\n\nChinese MacBERT: URL \nChinese BERT series: URL \nChinese ELECTRA: URL \nChinese XLNet: URL \nKnowledge Distillation Toolkit - TextBrewer: URL \n\nMore resources by HFL: URL" ]
[ "TAGS\n#transformers #pytorch #tf #xlm-roberta #fill-mask #zh #bo #kk #ko #mn #ug #yue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "## CINO: Pre-trained Language Models for Chinese Minority Languages(中国少数民族预训练模型)\n\nMultilingual Pre-trained Language Model, such as mBERT, XLM-R, provide multilingual and cross-lingual ability for language understanding.\nWe have seen rapid progress on building multilingual PLMs in recent year.\nHowever, there is a lack of contributions on building PLMs on Chines minority languages, which hinders researchers from building powerful NLP systems.\n\nTo address the absence of Chinese minority PLMs, Joint Laboratory of HIT and iFLYTEK Research (HFL) proposes CINO (Chinese-miNOrity pre-trained language model), which is built on XLM-R with additional pre-training using Chinese minority corpus, such as \n- Chinese,中文(zh)\n- Tibetan,藏语(bo)\n- Mongolian (Uighur form),蒙语(mn)\n- Uyghur,维吾尔语(ug)\n- Kazakh (Arabic form),哈萨克语(kk)\n- Korean,朝鲜语(ko)\n- Zhuang,壮语\n- Cantonese,粤语(yue)\n\nPlease read our GitHub repository for more details (Chinese): URL\n\nYou may also interested in,\n\nChinese MacBERT: URL \nChinese BERT series: URL \nChinese ELECTRA: URL \nChinese XLNet: URL \nKnowledge Distillation Toolkit - TextBrewer: URL \n\nMore resources by HFL: URL" ]
fill-mask
transformers
## CINO: Pre-trained Language Models for Chinese Minority Languages(中国少数民族预训练模型) Multilingual pre-trained language models, such as mBERT and XLM-R, provide multilingual and cross-lingual abilities for language understanding. We have seen rapid progress on building multilingual PLMs in recent years. However, there is a lack of work on building PLMs for Chinese minority languages, which hinders researchers from building powerful NLP systems for them. To address the absence of Chinese minority PLMs, the Joint Laboratory of HIT and iFLYTEK Research (HFL) proposes CINO (Chinese-miNOrity pre-trained language model), which is built on XLM-R with additional pre-training on corpora of Chinese minority languages, such as - Chinese,中文(zh) - Tibetan,藏语(bo) - Mongolian (Uighur form),蒙语(mn) - Uyghur,维吾尔语(ug) - Kazakh (Arabic form),哈萨克语(kk) - Korean,朝鲜语(ko) - Zhuang,壮语 - Cantonese,粤语(yue) Please read our GitHub repository for more details (Chinese): https://github.com/ymcui/Chinese-Minority-PLM You may also be interested in: Chinese MacBERT: https://github.com/ymcui/MacBERT Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA Chinese XLNet: https://github.com/ymcui/Chinese-XLNet Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer More resources by HFL: https://github.com/ymcui/HFL-Anthology
{"language": ["zh", "bo", "kk", "ko", "mn", "ug", "yue"], "license": "apache-2.0"}
hfl/cino-small-v2
null
[ "transformers", "pytorch", "tf", "xlm-roberta", "fill-mask", "zh", "bo", "kk", "ko", "mn", "ug", "yue", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "zh", "bo", "kk", "ko", "mn", "ug", "yue" ]
TAGS #transformers #pytorch #tf #xlm-roberta #fill-mask #zh #bo #kk #ko #mn #ug #yue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
## CINO: Pre-trained Language Models for Chinese Minority Languages(中国少数民族预训练模型) Multilingual pre-trained language models, such as mBERT and XLM-R, provide multilingual and cross-lingual abilities for language understanding. We have seen rapid progress on building multilingual PLMs in recent years. However, there is a lack of work on building PLMs for Chinese minority languages, which hinders researchers from building powerful NLP systems for them. To address the absence of Chinese minority PLMs, the Joint Laboratory of HIT and iFLYTEK Research (HFL) proposes CINO (Chinese-miNOrity pre-trained language model), which is built on XLM-R with additional pre-training on corpora of Chinese minority languages, such as - Chinese,中文(zh) - Tibetan,藏语(bo) - Mongolian (Uighur form),蒙语(mn) - Uyghur,维吾尔语(ug) - Kazakh (Arabic form),哈萨克语(kk) - Korean,朝鲜语(ko) - Zhuang,壮语 - Cantonese,粤语(yue) Please read our GitHub repository for more details (Chinese): URL You may also be interested in: Chinese MacBERT: URL Chinese BERT series: URL Chinese ELECTRA: URL Chinese XLNet: URL Knowledge Distillation Toolkit - TextBrewer: URL More resources by HFL: URL
[ "## CINO: Pre-trained Language Models for Chinese Minority Languages(中国少数民族预训练模型)\n\nMultilingual Pre-trained Language Model, such as mBERT, XLM-R, provide multilingual and cross-lingual ability for language understanding.\nWe have seen rapid progress on building multilingual PLMs in recent year.\nHowever, there is a lack of contributions on building PLMs on Chines minority languages, which hinders researchers from building powerful NLP systems.\n\nTo address the absence of Chinese minority PLMs, Joint Laboratory of HIT and iFLYTEK Research (HFL) proposes CINO (Chinese-miNOrity pre-trained language model), which is built on XLM-R with additional pre-training using Chinese minority corpus, such as \n- Chinese,中文(zh)\n- Tibetan,藏语(bo)\n- Mongolian (Uighur form),蒙语(mn)\n- Uyghur,维吾尔语(ug)\n- Kazakh (Arabic form),哈萨克语(kk)\n- Korean,朝鲜语(ko)\n- Zhuang,壮语\n- Cantonese,粤语(yue)\n\nPlease read our GitHub repository for more details (Chinese): URL\n\nYou may also interested in,\n\nChinese MacBERT: URL \nChinese BERT series: URL \nChinese ELECTRA: URL \nChinese XLNet: URL \nKnowledge Distillation Toolkit - TextBrewer: URL \n\nMore resources by HFL: URL" ]
[ "TAGS\n#transformers #pytorch #tf #xlm-roberta #fill-mask #zh #bo #kk #ko #mn #ug #yue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "## CINO: Pre-trained Language Models for Chinese Minority Languages(中国少数民族预训练模型)\n\nMultilingual Pre-trained Language Model, such as mBERT, XLM-R, provide multilingual and cross-lingual ability for language understanding.\nWe have seen rapid progress on building multilingual PLMs in recent year.\nHowever, there is a lack of contributions on building PLMs on Chines minority languages, which hinders researchers from building powerful NLP systems.\n\nTo address the absence of Chinese minority PLMs, Joint Laboratory of HIT and iFLYTEK Research (HFL) proposes CINO (Chinese-miNOrity pre-trained language model), which is built on XLM-R with additional pre-training using Chinese minority corpus, such as \n- Chinese,中文(zh)\n- Tibetan,藏语(bo)\n- Mongolian (Uighur form),蒙语(mn)\n- Uyghur,维吾尔语(ug)\n- Kazakh (Arabic form),哈萨克语(kk)\n- Korean,朝鲜语(ko)\n- Zhuang,壮语\n- Cantonese,粤语(yue)\n\nPlease read our GitHub repository for more details (Chinese): URL\n\nYou may also interested in,\n\nChinese MacBERT: URL \nChinese BERT series: URL \nChinese ELECTRA: URL \nChinese XLNet: URL \nKnowledge Distillation Toolkit - TextBrewer: URL \n\nMore resources by HFL: URL" ]
feature-extraction
transformers
# Please use 'Bert'-related classes (e.g., BertTokenizer / BertModel) to load this model! # ALL English models are UNCASED (lowercase=True) Under construction... Please visit our GitHub repo for more information: https://github.com/ymcui/PERT
{"language": ["en"], "license": "cc-by-nc-sa-4.0"}
hfl/english-pert-base
null
[ "transformers", "pytorch", "tf", "bert", "feature-extraction", "en", "license:cc-by-nc-sa-4.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #feature-extraction #en #license-cc-by-nc-sa-4.0 #endpoints_compatible #region-us
# Please use 'Bert'-related classes (e.g., BertTokenizer / BertModel) to load this model! # ALL English models are UNCASED (lowercase=True) Under construction... Please visit our GitHub repo for more information: URL
[ "# Please use 'Bert' related functions to load this model!", "# ALL English models are UNCASED (lowercase=True)\r\n\r\nUnder construction...\r\n\r\nPlease visit our GitHub repo for more information: URL" ]
[ "TAGS\n#transformers #pytorch #tf #bert #feature-extraction #en #license-cc-by-nc-sa-4.0 #endpoints_compatible #region-us \n", "# Please use 'Bert' related functions to load this model!", "# ALL English models are UNCASED (lowercase=True)\r\n\r\nUnder construction...\r\n\r\nPlease visit our GitHub repo for more information: URL" ]
feature-extraction
transformers
# Please use 'Bert'-related classes (e.g., BertTokenizer / BertModel) to load this model! # ALL English models are UNCASED (lowercase=True) Under construction... Please visit our GitHub repo for more information: https://github.com/ymcui/PERT
{"language": ["en"], "license": "cc-by-nc-sa-4.0"}
hfl/english-pert-large
null
[ "transformers", "pytorch", "tf", "bert", "feature-extraction", "en", "license:cc-by-nc-sa-4.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #tf #bert #feature-extraction #en #license-cc-by-nc-sa-4.0 #endpoints_compatible #region-us
# Please use 'Bert'-related classes (e.g., BertTokenizer / BertModel) to load this model! # ALL English models are UNCASED (lowercase=True) Under construction... Please visit our GitHub repo for more information: URL
[ "# Please use 'Bert' related functions to load this model!", "# ALL English models are UNCASED (lowercase=True)\r\n\r\nUnder construction...\r\n\r\nPlease visit our GitHub repo for more information: URL" ]
[ "TAGS\n#transformers #pytorch #tf #bert #feature-extraction #en #license-cc-by-nc-sa-4.0 #endpoints_compatible #region-us \n", "# Please use 'Bert' related functions to load this model!", "# ALL English models are UNCASED (lowercase=True)\r\n\r\nUnder construction...\r\n\r\nPlease visit our GitHub repo for more information: URL" ]
fill-mask
transformers
# This is a re-trained 3-layer RoBERTa-wwm-ext model. ## Chinese BERT with Whole Word Masking To further accelerate Chinese natural language processing, we provide **Chinese pre-trained BERT with Whole Word Masking**. **[Pre-Training with Whole Word Masking for Chinese BERT](https://arxiv.org/abs/1906.08101)** Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, Guoping Hu This repository is developed based on: https://github.com/google-research/bert You may also be interested in: - Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm - Chinese MacBERT: https://github.com/ymcui/MacBERT - Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA - Chinese XLNet: https://github.com/ymcui/Chinese-XLNet - Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer More resources by HFL: https://github.com/ymcui/HFL-Anthology ## Citation If you find the technical report or resources useful, please cite the following technical reports in your paper. - Primary: https://arxiv.org/abs/2004.13922 ``` @inproceedings{cui-etal-2020-revisiting, title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing", author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58", pages = "657--668", } ``` - Secondary: https://arxiv.org/abs/1906.08101 ``` @article{chinese-bert-wwm, title={Pre-Training with Whole Word Masking for Chinese BERT}, author={Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Yang, Ziqing and Wang, Shijin and Hu, Guoping}, journal={arXiv preprint arXiv:1906.08101}, year={2019} } ```
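Whole Word Masking changes only the masking step of MLM pre-training: when any WordPiece of a segmented Chinese word is selected, all pieces of that word are masked together. The sketch below is a simplified, self-contained illustration of that idea; the toy character-level tokenizer and the pre-segmented sample words are hypothetical stand-ins, not the actual WordPiece vocabulary or segmenter used for these models.

```python
import random

def whole_word_mask(words, tokenize, mask_ratio=0.15, seed=0):
    """Mask whole words: if a word is chosen, every sub-token it
    produces becomes [MASK] and must be recovered (simplified WWM)."""
    rng = random.Random(seed)
    tokens, labels = [], []
    for word in words:
        pieces = tokenize(word)
        if rng.random() < mask_ratio:
            tokens.extend(["[MASK]"] * len(pieces))
            labels.extend(pieces)               # targets: all pieces of the word
        else:
            tokens.extend(pieces)
            labels.extend([None] * len(pieces))  # not masked, no loss here
    return tokens, labels

# Toy tokenizer: splits a multi-character word into character pieces,
# marking continuations with '##' the way WordPiece does.
def toy_tokenize(word):
    return [word[0]] + ["##" + ch for ch in word[1:]]

words = ["哈尔滨", "工业", "大学"]   # hypothetical pre-segmented input
tokens, labels = whole_word_mask(words, toy_tokenize, mask_ratio=0.5, seed=1)
```

In plain character-level masking, a single character of 哈尔滨 could be masked alone; under WWM the word is masked or kept as a unit, which is the only difference from the original BERT pre-training recipe.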
{"language": ["zh"], "license": "apache-2.0", "tags": ["bert"], "pipeline_tag": "fill-mask"}
hfl/rbt3
null
[ "transformers", "pytorch", "tf", "jax", "bert", "fill-mask", "zh", "arxiv:1906.08101", "arxiv:2004.13922", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1906.08101", "2004.13922" ]
[ "zh" ]
TAGS #transformers #pytorch #tf #jax #bert #fill-mask #zh #arxiv-1906.08101 #arxiv-2004.13922 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# This is a re-trained 3-layer RoBERTa-wwm-ext model. ## Chinese BERT with Whole Word Masking To further accelerate Chinese natural language processing, we provide Chinese pre-trained BERT with Whole Word Masking. Pre-Training with Whole Word Masking for Chinese BERT Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, Guoping Hu This repository is developed based on: URL You may also be interested in: - Chinese BERT series: URL - Chinese MacBERT: URL - Chinese ELECTRA: URL - Chinese XLNet: URL - Knowledge Distillation Toolkit - TextBrewer: URL More resources by HFL: URL If you find the technical report or resources useful, please cite the following technical reports in your paper. - Primary: URL - Secondary: URL
[ "# This is a re-trained 3-layer RoBERTa-wwm-ext model.", "## Chinese BERT with Whole Word Masking\nFor further accelerating Chinese natural language processing, we provide Chinese pre-trained BERT with Whole Word Masking. \n\nPre-Training with Whole Word Masking for Chinese BERT \nYiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, Guoping Hu\n\nThis repository is developed based on:URL\n\nYou may also interested in,\n- Chinese BERT series: URL\n- Chinese MacBERT: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\nIf you find the technical report or resource is useful, please cite the following technical report in your paper.\n- Primary: URL\n\n- Secondary: URL" ]
[ "TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #zh #arxiv-1906.08101 #arxiv-2004.13922 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# This is a re-trained 3-layer RoBERTa-wwm-ext model.", "## Chinese BERT with Whole Word Masking\nFor further accelerating Chinese natural language processing, we provide Chinese pre-trained BERT with Whole Word Masking. \n\nPre-Training with Whole Word Masking for Chinese BERT \nYiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, Guoping Hu\n\nThis repository is developed based on:URL\n\nYou may also interested in,\n- Chinese BERT series: URL\n- Chinese MacBERT: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\nIf you find the technical report or resource is useful, please cite the following technical report in your paper.\n- Primary: URL\n\n- Secondary: URL" ]
fill-mask
transformers
# This is a re-trained 4-layer RoBERTa-wwm-ext model. ## Chinese BERT with Whole Word Masking To further accelerate Chinese natural language processing, we provide **Chinese pre-trained BERT with Whole Word Masking**. **[Pre-Training with Whole Word Masking for Chinese BERT](https://arxiv.org/abs/1906.08101)** Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, Guoping Hu This repository is developed based on: https://github.com/google-research/bert You may also be interested in: - Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm - Chinese MacBERT: https://github.com/ymcui/MacBERT - Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA - Chinese XLNet: https://github.com/ymcui/Chinese-XLNet - Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer More resources by HFL: https://github.com/ymcui/HFL-Anthology ## Citation If you find the technical report or resources useful, please cite the following technical reports in your paper. - Primary: https://arxiv.org/abs/2004.13922 ``` @inproceedings{cui-etal-2020-revisiting, title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing", author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58", pages = "657--668", } ``` - Secondary: https://arxiv.org/abs/1906.08101 ``` @article{chinese-bert-wwm, title={Pre-Training with Whole Word Masking for Chinese BERT}, author={Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Yang, Ziqing and Wang, Shijin and Hu, Guoping}, journal={arXiv preprint arXiv:1906.08101}, year={2019} } ```
{"language": ["zh"], "license": "apache-2.0", "tags": ["bert"]}
hfl/rbt4
null
[ "transformers", "pytorch", "tf", "jax", "bert", "fill-mask", "zh", "arxiv:1906.08101", "arxiv:2004.13922", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1906.08101", "2004.13922" ]
[ "zh" ]
TAGS #transformers #pytorch #tf #jax #bert #fill-mask #zh #arxiv-1906.08101 #arxiv-2004.13922 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# This is a re-trained 4-layer RoBERTa-wwm-ext model. ## Chinese BERT with Whole Word Masking To further accelerate Chinese natural language processing, we provide Chinese pre-trained BERT with Whole Word Masking. Pre-Training with Whole Word Masking for Chinese BERT Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, Guoping Hu This repository is developed based on: URL You may also be interested in: - Chinese BERT series: URL - Chinese MacBERT: URL - Chinese ELECTRA: URL - Chinese XLNet: URL - Knowledge Distillation Toolkit - TextBrewer: URL More resources by HFL: URL If you find the technical report or resources useful, please cite the following technical reports in your paper. - Primary: URL - Secondary: URL
[ "# This is a re-trained 4-layer RoBERTa-wwm-ext model.", "## Chinese BERT with Whole Word Masking\nFor further accelerating Chinese natural language processing, we provide Chinese pre-trained BERT with Whole Word Masking. \n\nPre-Training with Whole Word Masking for Chinese BERT \nYiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, Guoping Hu\n\nThis repository is developed based on:URL\n\nYou may also interested in,\n- Chinese BERT series: URL\n- Chinese MacBERT: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\nIf you find the technical report or resource is useful, please cite the following technical report in your paper.\n- Primary: URL\n\n- Secondary: URL" ]
[ "TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #zh #arxiv-1906.08101 #arxiv-2004.13922 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# This is a re-trained 4-layer RoBERTa-wwm-ext model.", "## Chinese BERT with Whole Word Masking\nFor further accelerating Chinese natural language processing, we provide Chinese pre-trained BERT with Whole Word Masking. \n\nPre-Training with Whole Word Masking for Chinese BERT \nYiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, Guoping Hu\n\nThis repository is developed based on:URL\n\nYou may also interested in,\n- Chinese BERT series: URL\n- Chinese MacBERT: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\nIf you find the technical report or resource is useful, please cite the following technical report in your paper.\n- Primary: URL\n\n- Secondary: URL" ]
fill-mask
transformers
# This is a re-trained 6-layer RoBERTa-wwm-ext model. ## Chinese BERT with Whole Word Masking To further accelerate Chinese natural language processing, we provide **Chinese pre-trained BERT with Whole Word Masking**. **[Pre-Training with Whole Word Masking for Chinese BERT](https://arxiv.org/abs/1906.08101)** Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, Guoping Hu This repository is developed based on: https://github.com/google-research/bert You may also be interested in: - Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm - Chinese MacBERT: https://github.com/ymcui/MacBERT - Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA - Chinese XLNet: https://github.com/ymcui/Chinese-XLNet - Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer More resources by HFL: https://github.com/ymcui/HFL-Anthology ## Citation If you find the technical report or resources useful, please cite the following technical reports in your paper. - Primary: https://arxiv.org/abs/2004.13922 ``` @inproceedings{cui-etal-2020-revisiting, title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing", author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58", pages = "657--668", } ``` - Secondary: https://arxiv.org/abs/1906.08101 ``` @article{chinese-bert-wwm, title={Pre-Training with Whole Word Masking for Chinese BERT}, author={Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Yang, Ziqing and Wang, Shijin and Hu, Guoping}, journal={arXiv preprint arXiv:1906.08101}, year={2019} } ```
{"language": ["zh"], "license": "apache-2.0", "tags": ["bert"]}
hfl/rbt6
null
[ "transformers", "pytorch", "tf", "jax", "bert", "fill-mask", "zh", "arxiv:1906.08101", "arxiv:2004.13922", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1906.08101", "2004.13922" ]
[ "zh" ]
TAGS #transformers #pytorch #tf #jax #bert #fill-mask #zh #arxiv-1906.08101 #arxiv-2004.13922 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# This is a re-trained 6-layer RoBERTa-wwm-ext model. ## Chinese BERT with Whole Word Masking To further accelerate Chinese natural language processing, we provide Chinese pre-trained BERT with Whole Word Masking. Pre-Training with Whole Word Masking for Chinese BERT Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, Guoping Hu This repository is developed based on: URL You may also be interested in: - Chinese BERT series: URL - Chinese MacBERT: URL - Chinese ELECTRA: URL - Chinese XLNet: URL - Knowledge Distillation Toolkit - TextBrewer: URL More resources by HFL: URL If you find the technical report or resources useful, please cite the following technical reports in your paper. - Primary: URL - Secondary: URL
[ "# This is a re-trained 6-layer RoBERTa-wwm-ext model.", "## Chinese BERT with Whole Word Masking\nFor further accelerating Chinese natural language processing, we provide Chinese pre-trained BERT with Whole Word Masking. \n\nPre-Training with Whole Word Masking for Chinese BERT \nYiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, Guoping Hu\n\nThis repository is developed based on:URL\n\nYou may also interested in,\n- Chinese BERT series: URL\n- Chinese MacBERT: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\nIf you find the technical report or resource is useful, please cite the following technical report in your paper.\n- Primary: URL\n\n- Secondary: URL" ]
[ "TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #zh #arxiv-1906.08101 #arxiv-2004.13922 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# This is a re-trained 6-layer RoBERTa-wwm-ext model.", "## Chinese BERT with Whole Word Masking\nFor further accelerating Chinese natural language processing, we provide Chinese pre-trained BERT with Whole Word Masking. \n\nPre-Training with Whole Word Masking for Chinese BERT \nYiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, Guoping Hu\n\nThis repository is developed based on:URL\n\nYou may also interested in,\n- Chinese BERT series: URL\n- Chinese MacBERT: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\nIf you find the technical report or resource is useful, please cite the following technical report in your paper.\n- Primary: URL\n\n- Secondary: URL" ]
fill-mask
transformers
# This is a re-trained 3-layer RoBERTa-wwm-ext-large model. ## Chinese BERT with Whole Word Masking To further accelerate Chinese natural language processing, we provide **Chinese pre-trained BERT with Whole Word Masking**. **[Pre-Training with Whole Word Masking for Chinese BERT](https://arxiv.org/abs/1906.08101)** Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, Guoping Hu This repository is developed based on: https://github.com/google-research/bert You may also be interested in: - Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm - Chinese MacBERT: https://github.com/ymcui/MacBERT - Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA - Chinese XLNet: https://github.com/ymcui/Chinese-XLNet - Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer More resources by HFL: https://github.com/ymcui/HFL-Anthology ## Citation If you find the technical report or resources useful, please cite the following technical reports in your paper. - Primary: https://arxiv.org/abs/2004.13922 ``` @inproceedings{cui-etal-2020-revisiting, title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing", author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58", pages = "657--668", } ``` - Secondary: https://arxiv.org/abs/1906.08101 ``` @article{chinese-bert-wwm, title={Pre-Training with Whole Word Masking for Chinese BERT}, author={Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Yang, Ziqing and Wang, Shijin and Hu, Guoping}, journal={arXiv preprint arXiv:1906.08101}, year={2019} } ```
{"language": ["zh"], "license": "apache-2.0", "tags": ["bert"]}
hfl/rbtl3
null
[ "transformers", "pytorch", "tf", "jax", "bert", "fill-mask", "zh", "arxiv:1906.08101", "arxiv:2004.13922", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1906.08101", "2004.13922" ]
[ "zh" ]
TAGS #transformers #pytorch #tf #jax #bert #fill-mask #zh #arxiv-1906.08101 #arxiv-2004.13922 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# This is a re-trained 3-layer RoBERTa-wwm-ext-large model. ## Chinese BERT with Whole Word Masking To further accelerate Chinese natural language processing, we provide Chinese pre-trained BERT with Whole Word Masking. Pre-Training with Whole Word Masking for Chinese BERT Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, Guoping Hu This repository is developed based on: URL You may also be interested in: - Chinese BERT series: URL - Chinese MacBERT: URL - Chinese ELECTRA: URL - Chinese XLNet: URL - Knowledge Distillation Toolkit - TextBrewer: URL More resources by HFL: URL If you find the technical report or resources useful, please cite the following technical reports in your paper. - Primary: URL - Secondary: URL
[ "# This is a re-trained 3-layer RoBERTa-wwm-ext-large model.", "## Chinese BERT with Whole Word Masking\nFor further accelerating Chinese natural language processing, we provide Chinese pre-trained BERT with Whole Word Masking. \n\nPre-Training with Whole Word Masking for Chinese BERT \nYiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, Guoping Hu\n\nThis repository is developed based on:URL\n\nYou may also interested in,\n- Chinese BERT series: URL\n- Chinese MacBERT: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\nIf you find the technical report or resource is useful, please cite the following technical report in your paper.\n- Primary: URL\n\n- Secondary: URL" ]
[ "TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #zh #arxiv-1906.08101 #arxiv-2004.13922 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# This is a re-trained 3-layer RoBERTa-wwm-ext-large model.", "## Chinese BERT with Whole Word Masking\nFor further accelerating Chinese natural language processing, we provide Chinese pre-trained BERT with Whole Word Masking. \n\nPre-Training with Whole Word Masking for Chinese BERT \nYiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, Guoping Hu\n\nThis repository is developed based on:URL\n\nYou may also interested in,\n- Chinese BERT series: URL\n- Chinese MacBERT: URL\n- Chinese ELECTRA: URL\n- Chinese XLNet: URL\n- Knowledge Distillation Toolkit - TextBrewer: URL\n\nMore resources by HFL: URL\n\nIf you find the technical report or resource is useful, please cite the following technical report in your paper.\n- Primary: URL\n\n- Secondary: URL" ]
image-classification
transformers
# fruits Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### apple ![apple](images/apple.jpg) #### banana ![banana](images/banana.jpg) #### mango ![mango](images/mango.jpg) #### orange ![orange](images/orange.jpg) #### tomato ![tomato](images/tomato.jpg)
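Downstream use of a five-class classifier like this reduces to a softmax over the class logits followed by an argmax. A minimal, dependency-free sketch of that post-processing step; the logit values below are made up for illustration and the label order is assumed:

```python
import math

LABELS = ["apple", "banana", "mango", "orange", "tomato"]

def softmax(logits):
    m = max(logits)                          # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def predict(logits):
    """Return (label, probability) for the highest-scoring class."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return LABELS[best], probs[best]

label, p = predict([0.2, 3.1, -1.0, 0.5, 0.4])   # hypothetical model outputs
```

In practice the logits come from the fine-tuned ViT head; this only shows how the raw scores turn into the class probabilities a demo displays.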
{"tags": ["image-classification", "pytorch", "huggingpics"], "metrics": ["accuracy"]}
hgarg/fruits
null
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #vit #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #region-us
# fruits Autogenerated by HuggingPics️ Create your own image classifier for anything by running the demo on Google Colab. Report any issues with the demo at the github repo. ## Example Images #### apple !apple #### banana !banana #### mango !mango #### orange !orange #### tomato !tomato
[ "# fruits\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport any issues with the demo at the github repo.", "## Example Images", "#### apple\n\n!apple", "#### banana\n\n!banana", "#### mango\n\n!mango", "#### orange\n\n!orange", "#### tomato\n\n!tomato" ]
[ "TAGS\n#transformers #pytorch #tensorboard #vit #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "# fruits\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport any issues with the demo at the github repo.", "## Example Images", "#### apple\n\n!apple", "#### banana\n\n!banana", "#### mango\n\n!mango", "#### orange\n\n!orange", "#### tomato\n\n!tomato" ]
image-classification
transformers
# indian-snacks Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### dosa ![dosa](images/dosa.jpg) #### idli ![idli](images/idli.jpg) #### naan ![naan](images/naan.jpg) #### samosa ![samosa](images/samosa.jpg) #### vada ![vada](images/vada.jpg)
{"tags": ["image-classification", "pytorch", "huggingpics"], "metrics": ["accuracy"]}
hgarg/indian-snacks
null
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #vit #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #region-us
# indian-snacks Autogenerated by HuggingPics️ Create your own image classifier for anything by running the demo on Google Colab. Report any issues with the demo at the github repo. ## Example Images #### dosa !dosa #### idli !idli #### naan !naan #### samosa !samosa #### vada !vada
[ "# indian-snacks\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport any issues with the demo at the github repo.", "## Example Images", "#### dosa\n\n!dosa", "#### idli\n\n!idli", "#### naan\n\n!naan", "#### samosa\n\n!samosa", "#### vada\n\n!vada" ]
[ "TAGS\n#transformers #pytorch #tensorboard #vit #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "# indian-snacks\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport any issues with the demo at the github repo.", "## Example Images", "#### dosa\n\n!dosa", "#### idli\n\n!idli", "#### naan\n\n!naan", "#### samosa\n\n!samosa", "#### vada\n\n!vada" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xls-r-300m-fa-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.4404 - Wer: 0.4402 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 7.083 | 0.75 | 300 | 3.0037 | 1.0 | | 1.5795 | 1.5 | 600 | 0.9167 | 0.7638 | | 0.658 | 2.25 | 900 | 0.5737 | 0.5595 | | 0.4213 | 3.0 | 1200 | 0.4404 | 0.4402 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-xls-r-300m-fa-colab", "results": []}]}
hgharibi/wav2vec2-xls-r-300m-fa-colab
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
wav2vec2-xls-r-300m-fa-colab ============================ This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common\_voice dataset. It achieves the following results on the evaluation set: * Loss: 0.4404 * Wer: 0.4402 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0003 * train\_batch\_size: 16 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 2 * total\_train\_batch\_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * num\_epochs: 3 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.16.2 * Pytorch 1.10.0+cu111 * Datasets 1.18.3 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
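As an illustrative aside (not part of the card above): the listed hyperparameters are related — the total train batch size is the per-device batch size times the gradient accumulation steps, and the `linear` scheduler with warmup ramps the learning rate up over the first 500 steps before decaying it to zero. A minimal sketch of both relationships; `linear_schedule` is a hypothetical helper mirroring what the scheduler does, not a Transformers API:

```python
# Effective batch size: per-device batch size times gradient accumulation steps.
train_batch_size = 16
gradient_accumulation_steps = 2
total_train_batch_size = train_batch_size * gradient_accumulation_steps  # 32, as reported

# A linear warmup/decay schedule: the learning rate rises linearly to the base
# rate over `warmup_steps`, then decays linearly to zero by `total_steps`.
def linear_schedule(step, warmup_steps, total_steps, base_lr):
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(total_train_batch_size)                 # 32
print(linear_schedule(500, 500, 1200, 3e-4))  # 0.0003 at the end of warmup
print(linear_schedule(1200, 500, 1200, 3e-4)) # 0.0 at the final step
```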
text-classification
transformers
# BETO(cased) This model was built using PyTorch. ## Model description Input for the model: Any Spanish text Output for the model: Sentiment. (0 - Negative, 1 - Positive (i.e. technology-related)) #### How to use Here is how to use this model to get the features of a given text in *PyTorch*: ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("hiiamsid/BETO_es_binary_classification") model = AutoModelForSequenceClassification.from_pretrained("hiiamsid/BETO_es_binary_classification") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Training procedure I fine-tuned [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the dataset.
{"language": ["es"], "license": "apache-2.0", "tags": ["es", "ticket classification"], "datasets": ["self made to classify whether text is related to technology or not."], "metrics": ["fscore", "accuracy", "precision", "recall"]}
hiiamsid/BETO_es_binary_classification
null
[ "transformers", "pytorch", "bert", "text-classification", "es", "ticket classification", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "es" ]
TAGS #transformers #pytorch #bert #text-classification #es #ticket classification #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
# BETO(cased) This model was built using PyTorch. ## Model description Input for the model: Any Spanish text Output for the model: Sentiment. (0 - Negative, 1 - Positive (i.e. technology-related)) #### How to use Here is how to use this model to get the features of a given text in *PyTorch*: ## Training procedure I fine-tuned dccuchile/bert-base-spanish-wwm-cased on the dataset.
[ "# BETO(cased)\nThis model was built using PyTorch.", "## Model description\nInput for the model: Any Spanish text\nOutput for the model: Sentiment. (0 - Negative, 1 - Positive (i.e. technology-related))", "#### How to use\nHere is how to use this model to get the features of a given text in *PyTorch*:", "## Training procedure\nI fine-tuned dccuchile/bert-base-spanish-wwm-cased on the dataset." ]
[ "TAGS\n#transformers #pytorch #bert #text-classification #es #ticket classification #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# BETO(cased)\nThis model was built using PyTorch.", "## Model description\nInput for the model: Any Spanish text\nOutput for the model: Sentiment. (0 - Negative, 1 - Positive (i.e. technology-related))", "#### How to use\nHere is how to use this model to get the features of a given text in *PyTorch*:", "## Training procedure\nI fine-tuned dccuchile/bert-base-spanish-wwm-cased on the dataset." ]
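A usage note not found in the original card: the snippet above stops at the raw model output, so a reader still has to turn the logits into a 0/1 label. A minimal, framework-free sketch of that last step (the logit values here are made up for illustration; in practice they come from `output.logits`):

```python
import math

# Hypothetical logits as a binary classification head might emit for one input.
logits = [2.0, -1.0]  # index 0 = Negative, index 1 = Positive (technology-related)

def softmax(xs):
    # Subtract the max before exponentiating for numerical stability.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)
predicted = max(range(len(probs)), key=lambda i: probs[i])
print(predicted)  # 0, i.e. Negative for these made-up logits
```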
text2text-generation
transformers
# Model Trained Using AutoNLP - Problem type: Summarization - Model ID: 20684327 - CO2 Emissions (in grams): 437.2441955971972 ## Validation Metrics - Loss: nan - Rouge1: 3.7729 - Rouge2: 0.4152 - RougeL: 3.5066 - RougeLsum: 3.5167 - Gen Len: 5.0577 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/hiiamsid/autonlp-Summarization-20684327 ```
{"language": "es", "tags": "autonlp", "datasets": ["hiiamsid/autonlp-data-Summarization"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 437.2441955971972}
hiiamsid/autonlp-Summarization-20684327
null
[ "transformers", "pytorch", "mt5", "text2text-generation", "autonlp", "es", "dataset:hiiamsid/autonlp-data-Summarization", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "es" ]
TAGS #transformers #pytorch #mt5 #text2text-generation #autonlp #es #dataset-hiiamsid/autonlp-data-Summarization #co2_eq_emissions #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Trained Using AutoNLP - Problem type: Summarization - Model ID: 20684327 - CO2 Emissions (in grams): 437.2441955971972 ## Validation Metrics - Loss: nan - Rouge1: 3.7729 - Rouge2: 0.4152 - RougeL: 3.5066 - RougeLsum: 3.5167 - Gen Len: 5.0577 ## Usage You can use cURL to access this model:
[ "# Model Trained Using AutoNLP\n\n- Problem type: Summarization\n- Model ID: 20684327\n- CO2 Emissions (in grams): 437.2441955971972", "## Validation Metrics\n\n- Loss: nan\n- Rouge1: 3.7729\n- Rouge2: 0.4152\n- RougeL: 3.5066\n- RougeLsum: 3.5167\n- Gen Len: 5.0577", "## Usage\n\nYou can use cURL to access this model:" ]
[ "TAGS\n#transformers #pytorch #mt5 #text2text-generation #autonlp #es #dataset-hiiamsid/autonlp-data-Summarization #co2_eq_emissions #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Trained Using AutoNLP\n\n- Problem type: Summarization\n- Model ID: 20684327\n- CO2 Emissions (in grams): 437.2441955971972", "## Validation Metrics\n\n- Loss: nan\n- Rouge1: 3.7729\n- Rouge2: 0.4152\n- RougeL: 3.5066\n- RougeLsum: 3.5167\n- Gen Len: 5.0577", "## Usage\n\nYou can use cURL to access this model:" ]
text2text-generation
transformers
# Model Trained Using AutoNLP - Problem type: Summarization - Model ID: 20684328 - CO2 Emissions (in grams): 1133.9679082840014 ## Validation Metrics - Loss: nan - Rouge1: 9.4193 - Rouge2: 0.91 - RougeL: 7.9376 - RougeLsum: 8.0076 - Gen Len: 10.65 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/hiiamsid/autonlp-Summarization-20684328 ```
{"language": "es", "tags": "autonlp", "datasets": ["hiiamsid/autonlp-data-Summarization"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 1133.9679082840014}
hiiamsid/autonlp-Summarization-20684328
null
[ "transformers", "pytorch", "mt5", "text2text-generation", "autonlp", "es", "dataset:hiiamsid/autonlp-data-Summarization", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "es" ]
TAGS #transformers #pytorch #mt5 #text2text-generation #autonlp #es #dataset-hiiamsid/autonlp-data-Summarization #co2_eq_emissions #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Trained Using AutoNLP - Problem type: Summarization - Model ID: 20684328 - CO2 Emissions (in grams): 1133.9679082840014 ## Validation Metrics - Loss: nan - Rouge1: 9.4193 - Rouge2: 0.91 - RougeL: 7.9376 - RougeLsum: 8.0076 - Gen Len: 10.65 ## Usage You can use cURL to access this model:
[ "# Model Trained Using AutoNLP\n\n- Problem type: Summarization\n- Model ID: 20684328\n- CO2 Emissions (in grams): 1133.9679082840014", "## Validation Metrics\n\n- Loss: nan\n- Rouge1: 9.4193\n- Rouge2: 0.91\n- RougeL: 7.9376\n- RougeLsum: 8.0076\n- Gen Len: 10.65", "## Usage\n\nYou can use cURL to access this model:" ]
[ "TAGS\n#transformers #pytorch #mt5 #text2text-generation #autonlp #es #dataset-hiiamsid/autonlp-data-Summarization #co2_eq_emissions #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Trained Using AutoNLP\n\n- Problem type: Summarization\n- Model ID: 20684328\n- CO2 Emissions (in grams): 1133.9679082840014", "## Validation Metrics\n\n- Loss: nan\n- Rouge1: 9.4193\n- Rouge2: 0.91\n- RougeL: 7.9376\n- RougeLsum: 8.0076\n- Gen Len: 10.65", "## Usage\n\nYou can use cURL to access this model:" ]
text2text-generation
transformers
This is the fine-tuned version of hiiamsid/est5-base for the Question Generation task. * Here input is the context only and output is questions. No information regarding answers was given to the model. * Unfortunately, due to a lack of sufficient resources it was fine-tuned with batch_size=10 and num_seq_len=256. So, if too large a context is given, the model may not capture information about the last portions. ``` from transformers import T5ForConditionalGeneration, T5Tokenizer MODEL_NAME = 'hiiamsid/est5-base-qg' model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME) tokenizer = T5Tokenizer.from_pretrained(MODEL_NAME) model.cuda(); model.eval(); def generate_question(text, beams=10, grams=2, num_return_seq=10, max_size=256): x = tokenizer(text, return_tensors='pt', padding=True).to(model.device) out = model.generate(**x, no_repeat_ngram_size=grams, num_beams=beams, num_return_sequences=num_return_seq, max_length=max_size) return tokenizer.decode(out[0], skip_special_tokens=True) print(generate_question('Any context in Spanish from which a question is to be generated')) ``` ## Citing & Authors - Datasets : [squad_es](https://huggingface.co/datasets/squad_es) - Model : [hiiamsid/est5-base](https://huggingface.co/hiiamsid/est5-base)
{"language": ["es"], "license": "mit", "tags": ["spanish", "question generation", "qg"], "Datasets": ["SQUAD"]}
hiiamsid/est5-base-qg
null
[ "transformers", "pytorch", "t5", "text2text-generation", "spanish", "question generation", "qg", "es", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "es" ]
TAGS #transformers #pytorch #t5 #text2text-generation #spanish #question generation #qg #es #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
This is the fine-tuned version of hiiamsid/est5-base for the Question Generation task. * Here input is the context only and output is questions. No information regarding answers was given to the model. * Unfortunately, due to a lack of sufficient resources it was fine-tuned with batch_size=10 and num_seq_len=256. So, if too large a context is given, the model may not capture information about the last portions. ## Citing & Authors - Datasets : squad_es - Model : hiiamsid/est5-base
[ "## Citing & Authors\n- Datasets : squad_es\n- Model : hiiamsid/est5-base" ]
[ "TAGS\n#transformers #pytorch #t5 #text2text-generation #spanish #question generation #qg #es #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "## Citing & Authors\n- Datasets : squad_es\n- Model : hiiamsid/est5-base" ]
text2text-generation
transformers
This is a smaller version of the [google/mt5-base](https://huggingface.co/google/mt5-base) model with only Spanish embeddings left. * The original model has 582M parameters, with 237M of them being input and output embeddings. * After shrinking the `sentencepiece` vocabulary from 250K to 25K (top 25K Spanish tokens) the number of model parameters reduced to 237M parameters, and model size reduced from 2.2GB to 0.9GB - 42% of the original one. ## Citing & Authors - Datasets : [cleaned corpora](https://github.com/crscardellino/sbwce) - Model : [google/mt5-base](https://huggingface.co/google/mt5-base) - Reference: [cointegrated/rut5-base](https://huggingface.co/cointegrated/rut5-base)
{"language": ["es"], "license": "mit", "tags": ["spanish"]}
hiiamsid/est5-base
null
[ "transformers", "pytorch", "t5", "text2text-generation", "spanish", "es", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "es" ]
TAGS #transformers #pytorch #t5 #text2text-generation #spanish #es #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
This is a smaller version of the google/mt5-base model with only Spanish embeddings left. * The original model has 582M parameters, with 237M of them being input and output embeddings. * After shrinking the 'sentencepiece' vocabulary from 250K to 25K (top 25K Spanish tokens) the number of model parameters reduced to 237M parameters, and model size reduced from 2.2GB to 0.9GB - 42% of the original one. ## Citing & Authors - Datasets : cleaned corpora - Model : google/mt5-base - Reference: cointegrated/rut5-base
[ "## Citing & Authors\n- Datasets : cleaned corpora\n- Model : google/mt5-base\n- Reference: cointegrated/rut5-base" ]
[ "TAGS\n#transformers #pytorch #t5 #text2text-generation #spanish #es #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "## Citing & Authors\n- Datasets : cleaned corpora\n- Model : google/mt5-base\n- Reference: cointegrated/rut5-base" ]
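A back-of-the-envelope check (not from the card itself) of the parameter savings that vocabulary shrinking yields, assuming mT5-base's hidden size of 768 and separate (untied) input and output embedding matrices:

```python
d_model = 768                       # mT5-base hidden size (assumption)
old_vocab, new_vocab = 250_000, 25_000

# Each embedding matrix is vocab_size x d_model; pruning removes the rows of
# the dropped tokens from both the input and the output embedding matrix.
saved_per_matrix = (old_vocab - new_vocab) * d_model
total_saved = 2 * saved_per_matrix

remaining = 582_000_000 - total_saved
print(total_saved)  # 345600000 parameters removed
print(remaining)    # ~236M, consistent with the ~237M quoted in the card
```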
text2text-generation
transformers
This is a smaller version of the [google/mt5-base](https://huggingface.co/google/mt5-base) model with only Hindi embeddings left. * The original model has 582M parameters, with 237M of them being input and output embeddings. * After shrinking the `sentencepiece` vocabulary from 250K to 25K (top 25K Hindi tokens) the number of model parameters reduced to 237M parameters, and model size reduced from 2.2GB to 0.9GB - 42% of the original one. ## Citing & Authors - Model : [google/mt5-base](https://huggingface.co/google/mt5-base) - Reference: [cointegrated/rut5-base](https://huggingface.co/cointegrated/rut5-base)
{"language": ["hi"], "license": "mit", "tags": ["hindi"]}
hiiamsid/hit5-base
null
[ "transformers", "pytorch", "t5", "text2text-generation", "hindi", "hi", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "hi" ]
TAGS #transformers #pytorch #t5 #text2text-generation #hindi #hi #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
This is a smaller version of the google/mt5-base model with only Hindi embeddings left. * The original model has 582M parameters, with 237M of them being input and output embeddings. * After shrinking the 'sentencepiece' vocabulary from 250K to 25K (top 25K Hindi tokens) the number of model parameters reduced to 237M parameters, and model size reduced from 2.2GB to 0.9GB - 42% of the original one. ## Citing & Authors - Model : google/mt5-base - Reference: cointegrated/rut5-base
[ "## Citing & Authors\n- Model : google/mt5-base\n- Reference: cointegrated/rut5-base" ]
[ "TAGS\n#transformers #pytorch #t5 #text2text-generation #hindi #hi #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "## Citing & Authors\n- Model : google/mt5-base\n- Reference: cointegrated/rut5-base" ]
sentence-similarity
sentence-transformers
# hiiamsid/sentence_similarity_hindi This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('hiiamsid/sentence_similarity_hindi') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. 
In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results ``` cosine_pearson: 0.825825032, cosine_spearman: 0.8227195932, euclidean_pearson: 0.8127990959, euclidean_spearman: 0.8214681478, manhattan_pearson: 0.8111641963, manhattan_spearman: 0.8194870279, dot_pearson: 0.8096042841, dot_spearman: 0.8061808483 ``` For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 341 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 4, "evaluation_steps": 1000, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 137, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information --> - Model: [setu4993/LaBSE](https://huggingface.co/setu4993/LaBSE) - Sentence Transformers [Semantic Textual Similarity](https://www.sbert.net/examples/training/sts/README.html)
{"language": ["hi"], "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"}
hiiamsid/sentence_similarity_hindi
null
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "hi", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "hi" ]
TAGS #sentence-transformers #pytorch #bert #feature-extraction #sentence-similarity #transformers #hi #endpoints_compatible #region-us
# hiiamsid/sentence_similarity_hindi This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have sentence-transformers installed: Then you can use the model like this: ## Usage (HuggingFace Transformers) Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL ## Training The model was trained with the parameters: DataLoader: 'URL.dataloader.DataLoader' of length 341 with parameters: Loss: 'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' Parameters of the fit()-Method: ## Full Model Architecture ## Citing & Authors - Model: [setu4993/LaBSE] (URL - Sentence Transformers [Semantic Textual Similarity] (URL
[ "# hiiamsid/sentence_similarity_hindi\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.", "## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:", "## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.", "## Evaluation Results\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL", "## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 341 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \n\nParameters of the fit()-Method:", "## Full Model Architecture", "## Citing & Authors\n\n\n- Model: [setu4993/LaBSE]\n(URL\n- Sentence Transformers [Semantic Textual Similarity]\n(URL" ]
[ "TAGS\n#sentence-transformers #pytorch #bert #feature-extraction #sentence-similarity #transformers #hi #endpoints_compatible #region-us \n", "# hiiamsid/sentence_similarity_hindi\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.", "## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:", "## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.", "## Evaluation Results\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL", "## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 341 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \n\nParameters of the fit()-Method:", "## Full Model Architecture", "## Citing & Authors\n\n\n- Model: [setu4993/LaBSE]\n(URL\n- Sentence Transformers [Semantic Textual Similarity]\n(URL" ]
sentence-similarity
sentence-transformers
# hiiamsid/sentence_similarity_spanish_es This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ['Mi nombre es Siddhartha', 'Mis amigos me llamaron por mi nombre Siddhartha'] model = SentenceTransformer('hiiamsid/sentence_similarity_spanish_es') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. 
```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['Mi nombre es Siddhartha', 'Mis amigos me llamaron por mi nombre Siddhartha'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('hiiamsid/sentence_similarity_spanish_es') model = AutoModel.from_pretrained('hiiamsid/sentence_similarity_spanish_es') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results ``` cosine_pearson : 0.8280372842978689 cosine_spearman : 0.8232689765056079 euclidean_pearson : 0.81021993884437 euclidean_spearman : 0.8087904592393836 manhattan_pearson : 0.809645390126291 manhattan_spearman : 0.8077035464970413 dot_pearson : 0.7803662255836028 dot_spearman : 0.7699607641618339 ``` For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=hiiamsid/sentence_similarity_spanish_es) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 360 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "callback": null, "epochs": 4, "evaluation_steps": 1000, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 144, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors - Datasets : [stsb_multi_mt](https://huggingface.co/datasets/stsb_multi_mt) - Model : [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) - 
Sentence Transformers [Semantic Textual Similarity](https://www.sbert.net/examples/training/sts/README.html)
{"language": ["es"], "license": "apache-2.0", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"}
hiiamsid/sentence_similarity_spanish_es
null
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "es", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "es" ]
TAGS #sentence-transformers #pytorch #bert #feature-extraction #sentence-similarity #transformers #es #license-apache-2.0 #endpoints_compatible #has_space #region-us
# hiiamsid/sentence_similarity_spanish_es This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have sentence-transformers installed: Then you can use the model like this: ## Usage (HuggingFace Transformers) Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL ## Training The model was trained with the parameters: DataLoader: 'URL.dataloader.DataLoader' of length 360 with parameters: Loss: 'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' Parameters of the fit()-Method: ## Full Model Architecture ## Citing & Authors - Datasets : stsb_multi_mt - Model : dccuchile/bert-base-spanish-wwm-cased - Sentence Transformers Semantic Textual Similarity
[ "# hiiamsid/sentence_similarity_spanish_es\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.", "## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:", "## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.", "## Evaluation Results\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL", "## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 360 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \n\nParameters of the fit()-Method:", "## Full Model Architecture", "## Citing & Authors\n- Datasets : stsb_multi_mt\n- Model : dccuchile/bert-base-spanish-wwm-cased\n- Sentence Transformers Semantic Textual Similarity" ]
[ "TAGS\n#sentence-transformers #pytorch #bert #feature-extraction #sentence-similarity #transformers #es #license-apache-2.0 #endpoints_compatible #has_space #region-us \n", "# hiiamsid/sentence_similarity_spanish_es\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.", "## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:", "## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.", "## Evaluation Results\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL", "## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 360 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \n\nParameters of the fit()-Method:", "## Full Model Architecture", "## Citing & Authors\n- Datasets : stsb_multi_mt\n- Model : dccuchile/bert-base-spanish-wwm-cased\n- Sentence Transformers Semantic Textual Similarity" ]
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.12.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "distilbert-base-uncased-finetuned-squad", "results": []}]}
hiiii23/distilbert-base-uncased-finetuned-squad
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
# distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of distilbert-base-uncased on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.12.1 - Tokenizers 0.10.3
[ "# distilbert-base-uncased-finetuned-squad\n\nThis model is a fine-tuned version of distilbert-base-uncased on the squad dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3", "### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.9.0+cu111\n- Datasets 1.12.1\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n", "# distilbert-base-uncased-finetuned-squad\n\nThis model is a fine-tuned version of distilbert-base-uncased on the squad dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3", "### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.9.0+cu111\n- Datasets 1.12.1\n- Tokenizers 0.10.3" ]
text-generation
transformers
<br /> <div align="center"> <img src="https://raw.githubusercontent.com/himanshu-dutta/pycoder/master/docs/pycoder-logo-p.png"> <br/> <img alt="Made With Python" src="http://ForTheBadge.com/images/badges/made-with-python.svg" height=28 style="display:inline; height:28px;" /> <img alt="Medium" src="https://img.shields.io/badge/Medium-12100E?style=for-the-badge&logo=medium&logoColor=white" height=28 style="display:inline; height:28px;"/> <a href="https://wandb.ai/himanshu-dutta/pycoder"> <img alt="WandB Dashboard" src="https://raw.githubusercontent.com/wandb/assets/04cfa58cc59fb7807e0423187a18db0c7430bab5/wandb-github-badge-28.svg" height=28 style="display:inline; height:28px;" /> </a> [![PyPI version fury.io](https://badge.fury.io/py/pycoder.svg)](https://pypi.org/project/pycoder/) </div> <div align="justify"> `PyCoder` is a tool to generate Python code out of a few given topics and a description. It uses the GPT-2 language model as its engine. Pycoder poses writing Python code as conditional-Causal Language Modelling (c-CLM). It has been trained on millions of lines of Python code written by all of us. At the current stage and state of training, it produces sensible code from a few lines of description, but the scope of improvement for the model is limitless. Pycoder has been developed as a Command-Line tool (CLI), an API endpoint, as well as a Python package (yet to be deployed to PyPI). This repository acts as a framework for anyone who either wants to try to build Pycoder from scratch or turn Pycoder into maybe a `CPPCoder` or `JSCoder` 😃. A blog post about the development of the project will be released soon. 
To use `Pycoder` as a CLI utility, clone the repository as normal, and install the package with: ```console foo@bar:❯ pip install pycoder ``` After this the package could be verified and accessed as either a native CLI tool or a python package with: ```console foo@bar:❯ python -m pycoder --version Or directly as: foo@bar:❯ pycoder --version ``` On installation the CLI can be used directly, such as: ```console foo@bar:❯ pycoder -t pytorch -t torch -d "a trainer class to train vision model" -ml 120 ``` The API endpoint is deployed using FastAPI. Once all the requirements have been installed for the project, the API can be accessed with: ```console foo@bar:❯ pycoder --endpoint PORT_NUMBER Or foo@bar:❯ pycoder -e PORT_NUMBER ``` </div> ## Tech Stack <div align="center"> <img alt="Python" src="https://img.shields.io/badge/python-%2314354C.svg?style=for-the-badge&logo=python&logoColor=white" style="display:inline;" /> <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-%23EE4C2C.svg?style=for-the-badge&logo=PyTorch&logoColor=white" style="display:inline;" /> <img alt="Transformers" src="https://raw.githubusercontent.com/huggingface/transformers/master/docs/source/imgs/transformers_logo_name.png" height=28 width=120 style="display:inline; background-color:white; height:28px; width:120px"/> <img alt="Docker" src="https://img.shields.io/badge/docker-%230db7ed.svg?style=for-the-badge&logo=docker&logoColor=white" style="display:inline;" /> <img src="https://fastapi.tiangolo.com/img/logo-margin/logo-teal.png" alt="FastAPI" height=28 style="display:inline; background-color:black; height:28px;" /> <img src="https://typer.tiangolo.com/img/logo-margin/logo-margin-vector.svg" height=28 style="display:inline; background-color:teal; height:28px;" /> </div> ## Tested Platforms <div align="center"> <img alt="Linux" src="https://img.shields.io/badge/Linux-FCC624?style=for-the-badge&logo=linux&logoColor=black" style="display:inline;" /> <img alt="Windows 10" 
src="https://img.shields.io/badge/Windows-0078D6?style=for-the-badge&logo=windows&logoColor=white" style="display:inline;" /> </div> ## BibTeX If you want to cite the framework feel free to use this: ```bibtex @article{dutta2021pycoder, title={Pycoder}, author={Dutta, H}, journal={GitHub. Note: https://github.com/himanshu-dutta/pycoder}, year={2021} } ``` <hr /> <div align="center"> <img alt="MIT License" src="https://img.shields.io/github/license/himanshu-dutta/pycoder?style=for-the-badge&logo=appveyor" style="display:inline;" /> <img src="https://img.shields.io/badge/Copyright-Himanshu_Dutta-2ea44f?style=for-the-badge&logo=appveyor" style="display:inline;" /> </div>
{}
himanshu-dutta/pycoder-gpt2
null
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
<br /> <div align="center"> <img src="URL <br/> <img alt="Made With Python" src="URL height=28 style="display:inline; height:28px;" /> <img alt="Medium" src="URL height=28 style="display:inline; height:28px;"/> <a href="URL <img alt="WandB Dashboard" src="URL height=28 style="display:inline; height:28px;" /> </a> ![PyPI version URL](URL </div> <div align="justify"> 'PyCoder' is a tool to generate Python code out of a few given topics and a description. It uses the GPT-2 language model as its engine. Pycoder poses writing Python code as conditional-Causal Language Modelling (c-CLM). It has been trained on millions of lines of Python code written by all of us. At the current stage and state of training, it produces sensible code from a few lines of description, but the scope of improvement for the model is limitless. Pycoder has been developed as a Command-Line tool (CLI), an API endpoint, as well as a Python package (yet to be deployed to PyPI). This repository acts as a framework for anyone who either wants to try to build Pycoder from scratch or turn Pycoder into maybe a 'CPPCoder' or 'JSCoder'. A blog post about the development of the project will be released soon. To use 'Pycoder' as a CLI utility, clone the repository as normal, and install the package with: After this the package can be verified and accessed as either a native CLI tool or a Python package with: On installation the CLI can be used directly, such as: The API endpoint is deployed using FastAPI. 
Once all the requirements have been installed for the project, the API can be accessed with: </div> ## Tech Stack <div align="center"> <img alt="Python" src="URL style="display:inline;" /> <img alt="PyTorch" src="URL style="display:inline;" /> <img alt="Transformers" src="URL height=28 width=120 style="display:inline; background-color:white; height:28px; width:120px"/> <img alt="Docker" src="URL style="display:inline;" /> <img src="URL alt="FastAPI" height=28 style="display:inline; background-color:black; height:28px;" /> <img src="URL height=28 style="display:inline; background-color:teal; height:28px;" /> </div> ## Tested Platforms <div align="center"> <img alt="Linux" src="URL style="display:inline;" /> <img alt="Windows 10" src="URL style="display:inline;" /> </div> ## BibTeX If you want to cite the framework feel free to use this: <hr /> <div align="center"> <img alt="MIT License" src="URL style="display:inline;" /> <img src="URL style="display:inline;" /> </div>
[ "## Tech Stack\n<div align=\"center\">\n<img alt=\"Python\" src=\"URL style=\"display:inline;\" />\n<img alt=\"PyTorch\" src=\"URL style=\"display:inline;\" />\n<img alt=\"Transformers\" src=\"URL height=28 width=120 style=\"display:inline; background-color:white; height:28px; width:120px\"/>\n<img alt=\"Docker\" src=\"URL style=\"display:inline;\" />\n<img src=\"URL alt=\"FastAPI\" height=28 style=\"display:inline; background-color:black; height:28px;\" /> \n<img src=\"URL height=28 style=\"display:inline; background-color:teal; height:28px;\" />\n</div>", "## Tested Platforms\n<div align=\"center\">\n<img alt=\"Linux\" src=\"URL style=\"display:inline;\" />\n<img alt=\"Windows 10\" src=\"URL style=\"display:inline;\" />\n</div>", "## BibTeX\nIf you want to cite the framework feel free to use this:\n\n\n<hr />\n\n<div align=\"center\">\n<img alt=\"MIT License\" src=\"URL style=\"display:inline;\" /> \n<img src=\"URL style=\"display:inline;\" />\n</div>" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "## Tech Stack\n<div align=\"center\">\n<img alt=\"Python\" src=\"URL style=\"display:inline;\" />\n<img alt=\"PyTorch\" src=\"URL style=\"display:inline;\" />\n<img alt=\"Transformers\" src=\"URL height=28 width=120 style=\"display:inline; background-color:white; height:28px; width:120px\"/>\n<img alt=\"Docker\" src=\"URL style=\"display:inline;\" />\n<img src=\"URL alt=\"FastAPI\" height=28 style=\"display:inline; background-color:black; height:28px;\" /> \n<img src=\"URL height=28 style=\"display:inline; background-color:teal; height:28px;\" />\n</div>", "## Tested Platforms\n<div align=\"center\">\n<img alt=\"Linux\" src=\"URL style=\"display:inline;\" />\n<img alt=\"Windows 10\" src=\"URL style=\"display:inline;\" />\n</div>", "## BibTeX\nIf you want to cite the framework feel free to use this:\n\n\n<hr />\n\n<div align=\"center\">\n<img alt=\"MIT License\" src=\"URL style=\"display:inline;\" /> \n<img src=\"URL style=\"display:inline;\" />\n</div>" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.3780 - Wer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 0.08 | 10 | 14.0985 | 1.0 | | No log | 0.16 | 20 | 13.8638 | 1.0004 | | No log | 0.24 | 30 | 13.5135 | 1.0023 | | No log | 0.32 | 40 | 12.8708 | 1.0002 | | No log | 0.4 | 50 | 11.6927 | 1.0 | | No log | 0.48 | 60 | 10.2733 | 1.0 | | No log | 0.56 | 70 | 8.1396 | 1.0 | | No log | 0.64 | 80 | 5.3503 | 1.0 | | No log | 0.72 | 90 | 3.7975 | 1.0 | | No log | 0.8 | 100 | 3.4275 | 1.0 | | No log | 0.88 | 110 | 3.3596 | 1.0 | | No log | 0.96 | 120 | 3.3780 | 1.0 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-base-timit-demo-colab", "results": []}]}
hiraki/wav2vec2-base-timit-demo-colab
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
wav2vec2-base-timit-demo-colab ============================== This model is a fine-tuned version of facebook/wav2vec2-base on the None dataset. It achieves the following results on the evaluation set: * Loss: 3.3780 * Wer: 1.0 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0001 * train\_batch\_size: 32 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 1000 * num\_epochs: 1 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.11.3 * Pytorch 1.10.0+cu111 * Datasets 1.13.3 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3" ]
text-generation
transformers
GPT-2 chatbot - talk to Ray Smuckles
{"tags": ["conversational"]}
hireddivas/DialoGPT-small-ray
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
GPT-2 chatbot - talk to Ray Smuckles
[]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text-generation
transformers
# GPT-2 model trained on Dana Scully's dialog.
{"tags": ["conversational"]}
hireddivas/DialoGPT-small-scully
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# GPT-2 model trained on Dana Scully's dialog.
[]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text-generation
transformers
GPT-2 chatbot - talk to Fox Mulder
{"tags": ["conversational"]}
hireddivas/dialoGPT-small-mulder
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
GPT-2 chatbot - talk to Fox Mulder
[]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text-generation
transformers
GPT-2 model trained on Phil from EastEnders
{"tags": ["conversational"]}
hireddivas/dialoGPT-small-phil
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
GPT-2 model trained on Phil from EastEnders
[]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text-generation
transformers
GPT-2 chatbot - talk to Sonic
{"tags": ["conversational"]}
hireddivas/dialoGPT-small-sonic
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
GPT-2 chatbot - talk to Sonic
[]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
fill-mask
transformers
# BERT base Japanese (character-level tokenization with whole word masking, jawiki-20200831) This pretrained model is almost the same as [cl-tohoku/bert-base-japanese-char-v2](https://huggingface.co/cl-tohoku/bert-base-japanese-char-v2) but does not need `fugashi` or `unidic_lite`. The only difference is the `word_tokenizer_type` property (specify `basic` instead of `mecab`) in `tokenizer_config.json`.
{"language": "ja", "license": "cc-by-sa-4.0", "datasets": ["wikipedia"]}
hiroshi-matsuda-rit/bert-base-japanese-basic-char-v2
null
[ "transformers", "pytorch", "bert", "fill-mask", "ja", "dataset:wikipedia", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ja" ]
TAGS #transformers #pytorch #bert #fill-mask #ja #dataset-wikipedia #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us
# BERT base Japanese (character-level tokenization with whole word masking, jawiki-20200831) This pretrained model is almost the same as cl-tohoku/bert-base-japanese-char-v2 but does not need 'fugashi' or 'unidic_lite'. The only difference is the 'word_tokenizer_type' property (specify 'basic' instead of 'mecab') in 'tokenizer_config.json'.
[ "# BERT base Japanese (character-level tokenization with whole word masking, jawiki-20200831)\n\nThis pretrained model is almost the same as cl-tohoku/bert-base-japanese-char-v2 but do not need 'fugashi' or 'unidic_lite'.\nThe only difference is in 'word_tokenzer_type' property (specify 'basic' instead of 'mecab') in 'tokenizer_config.json'." ]
[ "TAGS\n#transformers #pytorch #bert #fill-mask #ja #dataset-wikipedia #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# BERT base Japanese (character-level tokenization with whole word masking, jawiki-20200831)\n\nThis pretrained model is almost the same as cl-tohoku/bert-base-japanese-char-v2 but do not need 'fugashi' or 'unidic_lite'.\nThe only difference is in 'word_tokenzer_type' property (specify 'basic' instead of 'mecab') in 'tokenizer_config.json'." ]
token-classification
spacy
Japanese transformer pipeline (bert-base). Components: transformer, parser, ner. | Feature | Description | | --- | --- | | **Name** | `ja_gsd_bert_wwm_unidic_lite` | | **Version** | `3.1.1` | | **spaCy** | `>=3.1.0,<3.2.0` | | **Default Pipeline** | `transformer`, `parser`, `ner` | | **Components** | `transformer`, `parser`, `ner` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | [UD_Japanese-GSD](https://github.com/UniversalDependencies/UD_Japanese-GSD)<br />[UD_Japanese-GSD r2.8+NE](https://github.com/megagonlabs/UD_Japanese-GSD/releases/tag/r2.8-NE)<br />[SudachiDict_core](https://github.com/WorksApplications/SudachiDict)<br />[cl-tohoku/bert-base-japanese-whole-word-masking](https://huggingface.co/cl-tohoku/bert-base-japanese-whole-word-masking)<br />[unidic_lite](https://github.com/polm/unidic-lite) | | **License** | `CC BY-SA 4.0` | | **Author** | [Megagon Labs Tokyo.](https://github.com/megagonlabs/UD_japanese_GSD) | ### Label Scheme <details> <summary>View label scheme (45 labels for 2 components)</summary> | Component | Labels | | --- | --- | | **`parser`** | `ROOT`, `acl`, `advcl`, `advmod`, `amod`, `aux`, `case`, `cc`, `ccomp`, `compound`, `cop`, `csubj`, `dep`, `det`, `dislocated`, `fixed`, `mark`, `nmod`, `nsubj`, `nummod`, `obj`, `obl`, `punct` | | **`ner`** | `CARDINAL`, `DATE`, `EVENT`, `FAC`, `GPE`, `LANGUAGE`, `LAW`, `LOC`, `MONEY`, `MOVEMENT`, `NORP`, `ORDINAL`, `ORG`, `PERCENT`, `PERSON`, `PET_NAME`, `PHONE`, `PRODUCT`, `QUANTITY`, `TIME`, `TITLE_AFFIX`, `WORK_OF_ART` | </details> ### Accuracy | Type | Score | | --- | --- | | `DEP_UAS` | 93.68 | | `DEP_LAS` | 92.61 | | `SENTS_P` | 92.02 | | `SENTS_R` | 95.46 | | `SENTS_F` | 93.71 | | `ENTS_F` | 84.04 | | `ENTS_P` | 84.96 | | `ENTS_R` | 83.14 | | `TAG_ACC` | 0.00 | | `TRANSFORMER_LOSS` | 28861.67 | | `PARSER_LOSS` | 1306248.63 | | `NER_LOSS` | 13993.36 |
{"language": ["ja"], "license": "CC-BY-SA-4.0", "tags": ["spacy", "token-classification"]}
hiroshi-matsuda-rit/ja_gsd_bert_wwm_unidic_lite
null
[ "spacy", "token-classification", "ja", "model-index", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ja" ]
TAGS #spacy #token-classification #ja #model-index #region-us
Japanese transformer pipeline (bert-base). Components: transformer, parser, ner. ### Label Scheme View label scheme (45 labels for 2 components) ### Accuracy
[ "### Label Scheme\n\n\n\nView label scheme (45 labels for 2 components)", "### Accuracy" ]
[ "TAGS\n#spacy #token-classification #ja #model-index #region-us \n", "### Label Scheme\n\n\n\nView label scheme (45 labels for 2 components)", "### Accuracy" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.4600 - Matthews Correlation: 0.5291 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5227 | 1.0 | 535 | 0.4715 | 0.4678 | | 0.3493 | 2.0 | 1070 | 0.4600 | 0.5291 | | 0.2393 | 3.0 | 1605 | 0.6018 | 0.5219 | | 0.1714 | 4.0 | 2140 | 0.7228 | 0.5245 | | 0.1289 | 5.0 | 2675 | 0.8154 | 0.5279 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.5.1 - Datasets 1.18.3 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.5290966132843783, "name": "Matthews Correlation"}]}]}]}
histinct7002/distilbert-base-uncased-finetuned-cola
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-uncased-finetuned-cola ====================================== This model is a fine-tuned version of distilbert-base-uncased on the glue dataset. It achieves the following results on the evaluation set: * Loss: 0.4600 * Matthews Correlation: 0.5291 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.12.5 * Pytorch 1.5.1 * Datasets 1.18.3 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.5.1\n* Datasets 1.18.3\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.5.1\n* Datasets 1.18.3\n* Tokenizers 0.10.3" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0727 - Precision: 0.9334 - Recall: 0.9398 - F1: 0.9366 - Accuracy: 0.9845 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0271 | 1.0 | 878 | 0.0656 | 0.9339 | 0.9339 | 0.9339 | 0.9840 | | 0.0136 | 2.0 | 1756 | 0.0703 | 0.9268 | 0.9380 | 0.9324 | 0.9838 | | 0.008 | 3.0 | 2634 | 0.0727 | 0.9334 | 0.9398 | 0.9366 | 0.9845 | ### Framework versions - Transformers 4.12.3 - Pytorch 1.9.0+cu111 - Datasets 1.15.1 - Tokenizers 0.10.3
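The precision, recall, and F1 above come from the token-classification evaluation. As a rough sketch of how the three relate (the counts below are illustrative, not the actual conll2003 tallies):

```python
def prf1(tp, fp, fn):
    """Precision, recall and F1 from true-positive / false-positive / false-negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Illustrative counts only (not the actual evaluation tallies).
p, r, f = prf1(tp=5500, fp=392, fn=352)
print(round(p, 4), round(r, 4), round(f, 4))
```

F1 is the harmonic mean of precision and recall, so it always sits between the two and is pulled toward the smaller value.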
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metrics": [{"type": "precision", "value": 0.9334444444444444, "name": "Precision"}, {"type": "recall", "value": 0.9398142969012194, "name": "Recall"}, {"type": "f1", "value": 0.9366185406098445, "name": "F1"}, {"type": "accuracy", "value": 0.9845425516704529, "name": "Accuracy"}]}]}]}
histinct7002/distilbert-base-uncased-finetuned-ner
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-uncased-finetuned-ner ===================================== This model is a fine-tuned version of distilbert-base-uncased on the conll2003 dataset. It achieves the following results on the evaluation set: * Loss: 0.0727 * Precision: 0.9334 * Recall: 0.9398 * F1: 0.9366 * Accuracy: 0.9845 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.12.3 * Pytorch 1.9.0+cu111 * Datasets 1.15.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.15.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.15.1\n* Tokenizers 0.10.3" ]
text-generation
transformers
Note: this model was superseded by the [`load_in_8bit=True` feature in transformers](https://github.com/huggingface/transformers/pull/17901) by Younes Belkada and Tim Dettmers. Please see [this usage example](https://colab.research.google.com/drive/1qOjXfQIAULfKvZqwCen8-MoWKGdSatZ4#scrollTo=W8tQtyjp75O). This legacy model was built for [transformers v4.15.0](https://github.com/huggingface/transformers/releases/tag/v4.15.0) and pytorch 1.11. Newer versions could work, but are not supported. ### Quantized EleutherAI/gpt-j-6b with 8-bit weights This is a version of EleutherAI's GPT-J with 6 billion parameters that is modified so you can generate **and fine-tune the model in colab or equivalent desktop gpu (e.g. single 1080Ti)**. Here's how to run it: [![colab](https://camo.githubusercontent.com/84f0493939e0c4de4e6dbe113251b4bfb5353e57134ffd9fcab6b8714514d4d1/68747470733a2f2f636f6c61622e72657365617263682e676f6f676c652e636f6d2f6173736574732f636f6c61622d62616467652e737667)](https://colab.research.google.com/drive/1ft6wQU0BhqG5PRlwgaZJv2VukKKjU4Es) __The [original GPT-J](https://huggingface.co/EleutherAI/gpt-j-6B/tree/main)__ takes 22+ GB memory for float32 parameters alone, and that's before you account for gradients & optimizer. Even if you cast everything to 16-bit, it will still not fit onto most single-GPU setups short of A6000 and A100. You can run inference [on TPU](https://colab.research.google.com/github/kingoflolz/mesh-transformer-jax/blob/master/colab_demo.ipynb) or CPUs, but fine-tuning is way more expensive. 
Here, we apply several techniques to make GPT-J usable and fine-tunable on a single GPU with ~11 GB memory: - large weight tensors are quantized using dynamic 8-bit quantization and de-quantized just-in-time for multiplication - using gradient checkpoints to store only one activation per layer: using dramatically less memory at the cost of 30% slower training - scalable fine-tuning with [LoRA](https://arxiv.org/abs/2106.09685) and [8-bit Adam](https://arxiv.org/abs/2110.02861) In other words, all of the large weight-matrices are frozen in 8-bit, and you only train small adapters and optionally 1d tensors (layernorm scales, biases). ![img](https://i.imgur.com/n4XXo1x.png) __Does 8-bit affect model quality?__ Technically yes, but the effect is negligible in practice. [This notebook measures wikitext test perplexity](https://nbviewer.org/urls/huggingface.co/hivemind/gpt-j-6B-8bit/raw/main/check_perplexity.ipynb) and it is nigh indistinguishable from the original GPT-J. The quantized model is even slightly better, but that is not statistically significant. Our code differs from other 8-bit methods in that we use **8-bit only for storage, and all computations are performed in float16 or float32**. As a result, we can take advantage of nonlinear quantization that fits each individual weight distribution. Such nonlinear quantization does not accelerate inference, but it allows for much smaller error. __What about performance?__ Both checkpointing and de-quantization have some overhead, but it's surprisingly manageable. Depending on GPU and batch size, the quantized model is 1-10% slower than the original model on top of using gradient checkpoints (which is 30% overhead). In short, this is because block-wise quantization from bitsandbytes is really fast on GPU. ### How should I fine-tune the model? We recommend starting with the original hyperparameters from [the LoRA paper](https://arxiv.org/pdf/2106.09685.pdf). 
On top of that, there is one more trick to consider: the overhead from de-quantizing weights does not depend on batch size. As a result, the larger the batch size you can fit, the more efficiently you will train. ### Where can I train for free? You can train fine in colab, but if you get a K80, it's probably best to switch to other free gpu providers: [kaggle](https://towardsdatascience.com/amazon-sagemaker-studio-lab-a-great-alternative-to-google-colab-7194de6ef69a), [aws sagemaker](https://towardsdatascience.com/amazon-sagemaker-studio-lab-a-great-alternative-to-google-colab-7194de6ef69a) or [paperspace](https://docs.paperspace.com/gradient/more/instance-types/free-instances). For instance, this is the same notebook [running in kaggle](https://www.kaggle.com/justheuristic/dmazur-converted) using a more powerful P100 instance. ### Can I use this technique with other models? The model was converted using [this notebook](https://nbviewer.org/urls/huggingface.co/hivemind/gpt-j-6B-8bit/raw/main/convert-gpt-j.ipynb). It can be adapted to work with other model types. However, please bear in mind that some models replace Linear and Embedding with custom alternatives that require their own BNBWhateverWithAdapters.
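The storage-vs-compute split described above can be sketched with a toy per-tensor linear quantizer (the real model uses block-wise, nonlinear quantization from bitsandbytes, so this is only an illustration of the idea, not the actual code):

```python
def quantize8(weights):
    """Store floats as int8 codes plus a single dynamic-range scale."""
    scale = max(abs(w) for w in weights) / 127 or 1.0
    codes = [round(w / scale) for w in weights]
    return codes, scale

def dequantize8(codes, scale):
    """De-quantize just-in-time before the matmul; compute stays in float."""
    return [c * scale for c in codes]

weights = [0.25, -1.0, 0.5, 0.03]
codes, scale = quantize8(weights)
restored = dequantize8(codes, scale)
# Round-trip error is bounded by half a quantization step (scale / 2),
# up to float rounding.
max_err = max(abs(w, ) and abs(w - r) for w, r in zip(weights, restored))
print(max_err)
```

Memory drops from 4 bytes per weight to 1 byte plus a shared scale; the nonlinear variant used in practice picks codes that fit each weight tensor's distribution, so its error is smaller than this uniform sketch suggests.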
{"language": ["en"], "license": "apache-2.0", "tags": ["pytorch", "causal-lm"], "datasets": ["The Pile"]}
hivemind/gpt-j-6B-8bit
null
[ "transformers", "pytorch", "gptj", "text-generation", "causal-lm", "en", "arxiv:2106.09685", "arxiv:2110.02861", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.09685", "2110.02861" ]
[ "en" ]
TAGS #transformers #pytorch #gptj #text-generation #causal-lm #en #arxiv-2106.09685 #arxiv-2110.02861 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
Note: this model was superseded by the 'load_in_8bit=True' feature in transformers by Younes Belkada and Tim Dettmers. Please see this usage example. This legacy model was built for transformers v4.15.0 and pytorch 1.11. Newer versions could work, but are not supported. ### Quantized EleutherAI/gpt-j-6b with 8-bit weights This is a version of EleutherAI's GPT-J with 6 billion parameters that is modified so you can generate and fine-tune the model in colab or equivalent desktop gpu (e.g. single 1080Ti). Here's how to run it: ![colab](URL __The original GPT-J__ takes 22+ GB memory for float32 parameters alone, and that's before you account for gradients & optimizer. Even if you cast everything to 16-bit, it will still not fit onto most single-GPU setups short of A6000 and A100. You can run inference on TPU or CPUs, but fine-tuning is way more expensive. Here, we apply several techniques to make GPT-J usable and fine-tunable on a single GPU with ~11 GB memory: - large weight tensors are quantized using dynamic 8-bit quantization and de-quantized just-in-time for multiplication - using gradient checkpoints to store only one activation per layer: using dramatically less memory at the cost of 30% slower training - scalable fine-tuning with LoRA and 8-bit Adam In other words, all of the large weight-matrices are frozen in 8-bit, and you only train small adapters and optionally 1d tensors (layernorm scales, biases). !img __Does 8-bit affect model quality?__ Technically yes, but the effect is negligible in practice. This notebook measures wikitext test perplexity and it is nigh indistinguishable from the original GPT-J. The quantized model is even slightly better, but that is not statistically significant. Our code differs from other 8-bit methods in that we use 8-bit only for storage, and all computations are performed in float16 or float32. As a result, we can take advantage of nonlinear quantization that fits each individual weight distribution. 
Such nonlinear quantization does not accelerate inference, but it allows for much smaller error. __What about performance?__ Both checkpointing and de-quantization have some overhead, but it's surprisingly manageable. Depending on GPU and batch size, the quantized model is 1-10% slower than the original model on top of using gradient checkpoints (which is 30% overhead). In short, this is because block-wise quantization from bitsandbytes is really fast on GPU. ### How should I fine-tune the model? We recommend starting with the original hyperparameters from the LoRA paper. On top of that, there is one more trick to consider: the overhead from de-quantizing weights does not depend on batch size. As a result, the larger the batch size you can fit, the more efficiently you will train. ### Where can I train for free? You can train fine in colab, but if you get a K80, it's probably best to switch to other free gpu providers: kaggle, aws sagemaker or paperspace. For instance, this is the same notebook running in kaggle using a more powerful P100 instance. ### Can I use this technique with other models? The model was converted using this notebook. It can be adapted to work with other model types. However, please bear in mind that some models replace Linear and Embedding with custom alternatives that require their own BNBWhateverWithAdapters.
[ "### Quantized EleutherAI/gpt-j-6b with 8-bit weights\n\nThis is a version of EleutherAI's GPT-J with 6 billion parameters that is modified so you can generate and fine-tune the model in colab or equivalent desktop gpu (e.g. single 1080Ti).\n\nHere's how to run it: ![colab](URL\n\n__The original GPT-J__ takes 22+ GB memory for float32 parameters alone, and that's before you account for gradients & optimizer. Even if you cast everything to 16-bit, it will still not fit onto most single-GPU setups short of A6000 and A100. You can inference it on TPU or CPUs, but fine-tuning is way more expensive.\n\nHere, we apply several techniques to make GPT-J usable and fine-tunable on a single GPU with ~11 GB memory:\n- large weight tensors are quantized using dynamic 8-bit quantization and de-quantized just-in-time for multiplication\n- using gradient checkpoints to store one only activation per layer: using dramatically less memory at the cost of 30% slower training\n- scalable fine-tuning with LoRA and 8-bit Adam\n\nIn other words, all of the large weight-matrices are frozen in 8-bit, and you only train small adapters and optionally 1d tensors (layernorm scales, biases).\n\n!img\n\n\n__Does 8-bit affect model quality?__ Technically yes, but the effect is negligible in practice. This notebook measures wikitext test perplexity and it is nigh indistinguishable from the original GPT-J. Quantized model is even slightly better, but that is not statistically significant.\n\nOur code differs from other 8-bit methods in that we use 8-bit only for storage, and all computations are performed in float16 or float32. As a result, we can take advantage of nonlinear quantization that fits to each individual weight distribution. Such nonlinear quantization does not accelerate inference, but it allows for much smaller error.\n\n\n__What about performance?__ Both checkpointing and de-quantization has some overhead, but it's surprisingly manageable. 
Depending on GPU and batch size, the quantized model is 1-10% slower than the original model on top of using gradient checkpoints (which is 30% overhead). In short, this is because block-wise quantization from bitsandbytes is really fast on GPU.", "### How should I fine-tune the model?\n\nWe recommend starting with the original hyperparameters from the LoRA paper.\nOn top of that, there is one more trick to consider: the overhead from de-quantizing weights does not depend on batch size.\nAs a result, the larger batch size you can fit, the more efficient you will train.", "### Where can I train for free?\n\nYou can train fine in colab, but if you get a K80, it's probably best to switch to other free gpu providers: kaggle, aws sagemaker or paperspace. For intance, this is the same notebook running in kaggle using a more powerful P100 instance.", "### Can I use this technique with other models?\n\nThe model was converted using this notebook. It can be adapted to work with other model types. However, please bear in mind that some models replace Linear and Embedding with custom alternatives that require their own BNBWhateverWithAdapters." ]
[ "TAGS\n#transformers #pytorch #gptj #text-generation #causal-lm #en #arxiv-2106.09685 #arxiv-2110.02861 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### Quantized EleutherAI/gpt-j-6b with 8-bit weights\n\nThis is a version of EleutherAI's GPT-J with 6 billion parameters that is modified so you can generate and fine-tune the model in colab or equivalent desktop gpu (e.g. single 1080Ti).\n\nHere's how to run it: ![colab](URL\n\n__The original GPT-J__ takes 22+ GB memory for float32 parameters alone, and that's before you account for gradients & optimizer. Even if you cast everything to 16-bit, it will still not fit onto most single-GPU setups short of A6000 and A100. You can inference it on TPU or CPUs, but fine-tuning is way more expensive.\n\nHere, we apply several techniques to make GPT-J usable and fine-tunable on a single GPU with ~11 GB memory:\n- large weight tensors are quantized using dynamic 8-bit quantization and de-quantized just-in-time for multiplication\n- using gradient checkpoints to store one only activation per layer: using dramatically less memory at the cost of 30% slower training\n- scalable fine-tuning with LoRA and 8-bit Adam\n\nIn other words, all of the large weight-matrices are frozen in 8-bit, and you only train small adapters and optionally 1d tensors (layernorm scales, biases).\n\n!img\n\n\n__Does 8-bit affect model quality?__ Technically yes, but the effect is negligible in practice. This notebook measures wikitext test perplexity and it is nigh indistinguishable from the original GPT-J. Quantized model is even slightly better, but that is not statistically significant.\n\nOur code differs from other 8-bit methods in that we use 8-bit only for storage, and all computations are performed in float16 or float32. As a result, we can take advantage of nonlinear quantization that fits to each individual weight distribution. 
Such nonlinear quantization does not accelerate inference, but it allows for much smaller error.\n\n\n__What about performance?__ Both checkpointing and de-quantization has some overhead, but it's surprisingly manageable. Depending on GPU and batch size, the quantized model is 1-10% slower than the original model on top of using gradient checkpoints (which is 30% overhead). In short, this is because block-wise quantization from bitsandbytes is really fast on GPU.", "### How should I fine-tune the model?\n\nWe recommend starting with the original hyperparameters from the LoRA paper.\nOn top of that, there is one more trick to consider: the overhead from de-quantizing weights does not depend on batch size.\nAs a result, the larger batch size you can fit, the more efficient you will train.", "### Where can I train for free?\n\nYou can train fine in colab, but if you get a K80, it's probably best to switch to other free gpu providers: kaggle, aws sagemaker or paperspace. For intance, this is the same notebook running in kaggle using a more powerful P100 instance.", "### Can I use this technique with other models?\n\nThe model was converted using this notebook. It can be adapted to work with other model types. However, please bear in mind that some models replace Linear and Embedding with custom alternatives that require their own BNBWhateverWithAdapters." ]
null
transformers
This is the checkpoint of the prefix-tuning model we trained on 21 tasks using an upsampling temperature of 2. Note: the prefix module is large because we keep the re-parameterization weights and did not compress them, so that the checkpoint stays closer to the original form and remains extensible for researchers.
{}
hkunlp/T5_large_prefix_all_tasks_2upsample2
null
[ "transformers", "pytorch", "t5", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #t5 #endpoints_compatible #text-generation-inference #region-us
This is the checkpoint of the prefix-tuning model we trained on 21 tasks using an upsampling temperature of 2. Note: the prefix module is large because we keep the re-parameterization weights and did not compress them, so that the checkpoint stays closer to the original form and remains extensible for researchers.
[]
[ "TAGS\n#transformers #pytorch #t5 #endpoints_compatible #text-generation-inference #region-us \n" ]
automatic-speech-recognition
transformers
Convert a fairseq wav2vec model checkpoint (.pt) to the Transformers format Link: https://huggingface.co/tommy19970714/wav2vec2-base-960h Bash:
```bash
pip install transformers[sentencepiece]
pip install fairseq -U
git clone https://github.com/huggingface/transformers.git
cp transformers/src/transformers/models/wav2vec2/convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py .
wget https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_small.pt -O ./wav2vec_small.pt
mkdir dict
wget https://dl.fbaipublicfiles.com/fairseq/wav2vec/dict.ltr.txt -O ./dict/dict.ltr.txt
mkdir outputs
python convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py \
    --pytorch_dump_folder_path ./outputs \
    --checkpoint_path ./wav2vec_small.pt \
    --dict_path ./dict/dict.ltr.txt \
    --not_finetuned
```
# install and upload model
```
curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
sudo apt-get install git-lfs
git lfs install
git clone https://huggingface.co/hoangbinhmta99/wav2vec-demo
ls
cd wav2vec-demo/
git status
git add .
git config --global user.email [your email]
git config --global user.name [your name]
git commit -m "First model version"
git push
```
{}
hoangbinhmta99/wav2vec-demo
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #endpoints_compatible #region-us
Convert a fairseq wav2vec model checkpoint (.pt) to the Transformers format Link: URL Bash: # install and upload model
[ "# install and upload model" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #endpoints_compatible #region-us \n", "# install and upload model" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-ner This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0604 - Precision: 0.9247 - Recall: 0.9343 - F1: 0.9295 - Accuracy: 0.9854 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2082 | 1.0 | 753 | 0.0657 | 0.8996 | 0.9256 | 0.9125 | 0.9821 | | 0.0428 | 2.0 | 1506 | 0.0595 | 0.9268 | 0.9343 | 0.9305 | 0.9848 | | 0.0268 | 3.0 | 2259 | 0.0604 | 0.9247 | 0.9343 | 0.9295 | 0.9854 | ### Framework versions - Transformers 4.9.2 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
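All of these runs use `lr_scheduler_type: linear`. A minimal sketch of that schedule, assuming no warmup steps (the Trainer default when none are configured — an assumption here, since the card does not say):

```python
def linear_lr(step, total_steps, base_lr=2e-05):
    """Linearly decay the learning rate from base_lr down to 0 over training."""
    return base_lr * max(0.0, (total_steps - step) / total_steps)

# 3 epochs x 753 optimisation steps per epoch, as in the results table above.
total = 3 * 753
print(linear_lr(0, total), linear_lr(total // 2, total), linear_lr(total, total))
```

So the effective learning rate halves by the middle of epoch 2 and reaches zero on the final step, which partly explains why the last-epoch training loss keeps shrinking while validation loss plateaus.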
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": [], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "bert-base-uncased-finetuned-ner", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9853695435592783}}]}]}
hoanhkhoa/bert-base-uncased-finetuned-ner
null
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
bert-base-uncased-finetuned-ner =============================== This model is a fine-tuned version of bert-base-uncased on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.0604 * Precision: 0.9247 * Recall: 0.9343 * F1: 0.9295 * Accuracy: 0.9854 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.9.2 * Pytorch 1.9.0+cu102 * Datasets 1.11.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-finetuned-ner This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0381 - Precision: 0.9469 - Recall: 0.9530 - F1: 0.9500 - Accuracy: 0.9915 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.1328 | 1.0 | 753 | 0.0492 | 0.9143 | 0.9308 | 0.9225 | 0.9884 | | 0.0301 | 2.0 | 1506 | 0.0378 | 0.9421 | 0.9474 | 0.9448 | 0.9910 | | 0.0185 | 3.0 | 2259 | 0.0381 | 0.9469 | 0.9530 | 0.9500 | 0.9915 | ### Framework versions - Transformers 4.9.2 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": [], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "roberta-base-finetuned-ner", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9914674251177673}}]}]}
hoanhkhoa/roberta-base-finetuned-ner
null
[ "transformers", "pytorch", "tensorboard", "roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #roberta #token-classification #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us
roberta-base-finetuned-ner ========================== This model is a fine-tuned version of roberta-base on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.0381 * Precision: 0.9469 * Recall: 0.9530 * F1: 0.9500 * Accuracy: 0.9915 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.9.2 * Pytorch 1.9.0+cu102 * Datasets 1.11.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #roberta #token-classification #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3" ]
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset. It achieves the following results on the evaluation set: - Loss: 1.7004 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.316 | 1.0 | 2363 | 2.0234 | | 2.0437 | 2.0 | 4726 | 1.7881 | | 1.9058 | 3.0 | 7089 | 1.7004 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad_v2"], "model-index": [{"name": "distilbert-base-uncased-finetuned-squad", "results": []}]}
hogger32/distilbert-base-uncased-finetuned-squad
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad_v2", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #dataset-squad_v2 #license-apache-2.0 #endpoints_compatible #region-us
distilbert-base-uncased-finetuned-squad ======================================= This model is a fine-tuned version of distilbert-base-uncased on the squad\_v2 dataset. It achieves the following results on the evaluation set: * Loss: 1.7004 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.10.0+cu111 * Datasets 1.17.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #dataset-squad_v2 #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlmRoberta-for-VietnameseQA This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the UIT-Viquad_v2 dataset. It achieves the following results on the evaluation set: - Loss: 0.8315 ## Model description Fine-tuned by Honganh Nguyen (FPTU AI Club). ## Intended uses & limitations More information needed ## Training and evaluation data Credits to Viet Nguyen (FPTU AI Club) for the training and evaluation data. Training data: https://github.com/vietnguyen012/QA_viuit/blob/main/train.json Evaluation data: https://github.com/vietnguyen012/QA_viuit/blob/main/trial/trial.json ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.5701 | 1.0 | 2534 | 1.2220 | | 1.2942 | 2.0 | 5068 | 0.9698 | | 1.0693 | 3.0 | 7602 | 0.8315 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["squad_v2"], "model-index": [{"name": "xlmRoberta-for-VietnameseQA", "results": []}]}
hogger32/xlmRoberta-for-VietnameseQA
null
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "question-answering", "generated_from_trainer", "dataset:squad_v2", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #xlm-roberta #question-answering #generated_from_trainer #dataset-squad_v2 #license-mit #endpoints_compatible #region-us
xlmRoberta-for-VietnameseQA =========================== This model is a fine-tuned version of xlm-roberta-base on the UIT-Viquad\_v2 dataset. It achieves the following results on the evaluation set: * Loss: 0.8315 Model description ----------------- Fine-tuned by Honganh Nguyen (FPTU AI Club). Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- Credits to Viet Nguyen (FPTU AI Club) for the training and evaluation data. Training data: URL Evaluation data: URL Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 12 * eval\_batch\_size: 12 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.10.0+cu111 * Datasets 1.17.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 12\n* eval\\_batch\\_size: 12\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #xlm-roberta #question-answering #generated_from_trainer #dataset-squad_v2 #license-mit #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 12\n* eval\\_batch\\_size: 12\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
text-generation
transformers
# Zhongli, but not Zhongli
{"tags": ["conversational"]}
honguyenminh/old-zhongli
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Zhongli, but not Zhongli
[ "# Zhongli, but not Zhongli" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Zhongli, but not Zhongli" ]
null
null
dd
{}
hooni/bert-fine-tuned-cola
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #region-us
dd
[]
[ "TAGS\n#region-us \n" ]
text-generation
transformers
# Joey DialoGPT Model
{"tags": ["conversational"]}
houssaineamzil/DialoGPT-small-joey
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Joey DialoGPT Model
[]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4241 - Wer: 0.3381 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.7749 | 4.0 | 500 | 2.0639 | 1.0018 | | 0.9252 | 8.0 | 1000 | 0.4853 | 0.4821 | | 0.3076 | 12.0 | 1500 | 0.4507 | 0.4044 | | 0.1732 | 16.0 | 2000 | 0.4315 | 0.3688 | | 0.1269 | 20.0 | 2500 | 0.4481 | 0.3559 | | 0.1087 | 24.0 | 3000 | 0.4354 | 0.3464 | | 0.0832 | 28.0 | 3500 | 0.4241 | 0.3381 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-base-timit-demo-colab", "results": []}]}
hrdipto/wav2vec2-base-timit-demo-colab
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
wav2vec2-base-timit-demo-colab ============================== This model is a fine-tuned version of facebook/wav2vec2-base on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.4241 * Wer: 0.3381 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0001 * train\_batch\_size: 32 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 1000 * num\_epochs: 30 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.11.3 * Pytorch 1.10.0+cu111 * Datasets 1.13.3 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3" ]
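The cards above repeatedly list `lr_scheduler_type: linear` with `lr_scheduler_warmup_steps: 1000`: the learning rate ramps linearly up to the base rate over the warmup steps, then decays linearly to zero. A minimal sketch of that schedule (an assumption about the standard Transformers behavior, not code taken from the cards; step counts are illustrative):

```python
def linear_schedule_lr(step: int, base_lr: float, warmup_steps: int, total_steps: int) -> float:
    """Learning rate at a given optimizer step under linear warmup + linear decay."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps  # linear ramp-up
    remaining = max(0, total_steps - step)
    return base_lr * remaining / (total_steps - warmup_steps)  # linear decay to 0

# With settings like the card above: lr 1e-4, 1000 warmup steps, ~3500 total steps.
print(linear_schedule_lr(500, 1e-4, 1000, 3500))   # halfway through warmup: 5e-05
print(linear_schedule_lr(1000, 1e-4, 1000, 3500))  # peak: 0.0001
```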
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xls-r-300m-bangla-command-generated-data-finetune This model is a fine-tuned version of [hrdipto/wav2vec2-xls-r-300m-bangla-command-data](https://huggingface.co/hrdipto/wav2vec2-xls-r-300m-bangla-command-data) on the None dataset. It achieves the following results on the evaluation set: - eval_loss: 0.0099 - eval_wer: 0.0208 - eval_runtime: 2.5526 - eval_samples_per_second: 75.217 - eval_steps_per_second: 9.402 - epoch: 71.43 - step: 2000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 100 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
{"tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-xls-r-300m-bangla-command-generated-data-finetune", "results": []}]}
hrdipto/wav2vec2-xls-r-300m-bangla-command-generated-data-finetune
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #endpoints_compatible #region-us
# wav2vec2-xls-r-300m-bangla-command-generated-data-finetune This model is a fine-tuned version of hrdipto/wav2vec2-xls-r-300m-bangla-command-data on the None dataset. It achieves the following results on the evaluation set: - eval_loss: 0.0099 - eval_wer: 0.0208 - eval_runtime: 2.5526 - eval_samples_per_second: 75.217 - eval_steps_per_second: 9.402 - epoch: 71.43 - step: 2000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 100 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
[ "# wav2vec2-xls-r-300m-bangla-command-generated-data-finetune\n\nThis model is a fine-tuned version of hrdipto/wav2vec2-xls-r-300m-bangla-command-data on the None dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 0.0099\n- eval_wer: 0.0208\n- eval_runtime: 2.5526\n- eval_samples_per_second: 75.217\n- eval_steps_per_second: 9.402\n- epoch: 71.43\n- step: 2000", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 1000\n- num_epochs: 100\n- mixed_precision_training: Native AMP", "### Framework versions\n\n- Transformers 4.16.2\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #endpoints_compatible #region-us \n", "# wav2vec2-xls-r-300m-bangla-command-generated-data-finetune\n\nThis model is a fine-tuned version of hrdipto/wav2vec2-xls-r-300m-bangla-command-data on the None dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 0.0099\n- eval_wer: 0.0208\n- eval_runtime: 2.5526\n- eval_samples_per_second: 75.217\n- eval_steps_per_second: 9.402\n- epoch: 71.43\n- step: 2000", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 1000\n- num_epochs: 100\n- mixed_precision_training: Native AMP", "### Framework versions\n\n- Transformers 4.16.2\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.11.0" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xls-r-tf-left-right-shuru-word-level This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0504 - Wer: 0.6859 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 100 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 23.217 | 23.81 | 500 | 1.3437 | 0.6859 | | 1.1742 | 47.62 | 1000 | 1.0397 | 0.6859 | | 1.0339 | 71.43 | 1500 | 1.0155 | 0.6859 | | 0.9909 | 95.24 | 2000 | 1.0504 | 0.6859 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-xls-r-tf-left-right-shuru-word-level", "results": []}]}
hrdipto/wav2vec2-xls-r-tf-left-right-shuru-word-level
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
wav2vec2-xls-r-tf-left-right-shuru-word-level ============================================= This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the None dataset. It achieves the following results on the evaluation set: * Loss: 1.0504 * Wer: 0.6859 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0001 * train\_batch\_size: 32 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 1000 * num\_epochs: 100 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.11.3 * Pytorch 1.10.0+cu111 * Datasets 1.13.3 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 100\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 100\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xls-r-tf-left-right-shuru This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0921 - Wer: 1.2628 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 100 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 6.5528 | 23.81 | 500 | 0.5509 | 1.9487 | | 0.2926 | 47.62 | 1000 | 0.1306 | 1.2756 | | 0.1171 | 71.43 | 1500 | 0.1189 | 1.2628 | | 0.0681 | 95.24 | 2000 | 0.0921 | 1.2628 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-xls-r-tf-left-right-shuru", "results": []}]}
hrdipto/wav2vec2-xls-r-tf-left-right-shuru
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
wav2vec2-xls-r-tf-left-right-shuru ================================== This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.0921 * Wer: 1.2628 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0001 * train\_batch\_size: 32 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 1000 * num\_epochs: 100 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.11.3 * Pytorch 1.10.0+cu111 * Datasets 1.13.3 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 100\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 100\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3" ]
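Several of the speech cards above report `Wer`, including values above 1.0 — word error rate is (substitutions + deletions + insertions) divided by the reference length, so it can exceed 1 when the hypothesis contains many insertions. A minimal word-level Levenshtein sketch (illustrative only; the cards themselves presumably used a library metric):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edits to turn the first i reference words into the first j hypothesis words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[-1][-1] / len(ref)

print(wer("turn left now", "turn right now"))  # one substitution out of three words
print(wer("left", "left left left"))           # insertions push WER above 1.0
```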
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xls-r-tf-left-right-trainer This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. It achieves the following results on the evaluation set: - eval_loss: 0.0090 - eval_wer: 0.0037 - eval_runtime: 11.2686 - eval_samples_per_second: 71.703 - eval_steps_per_second: 8.963 - epoch: 21.05 - step: 4000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-xls-r-tf-left-right-trainer", "results": []}]}
hrdipto/wav2vec2-xls-r-tf-left-right-trainer
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
# wav2vec2-xls-r-tf-left-right-trainer This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the None dataset. It achieves the following results on the evaluation set: - eval_loss: 0.0090 - eval_wer: 0.0037 - eval_runtime: 11.2686 - eval_samples_per_second: 71.703 - eval_steps_per_second: 8.963 - epoch: 21.05 - step: 4000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
[ "# wav2vec2-xls-r-tf-left-right-trainer\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the None dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 0.0090\n- eval_wer: 0.0037\n- eval_runtime: 11.2686\n- eval_samples_per_second: 71.703\n- eval_steps_per_second: 8.963\n- epoch: 21.05\n- step: 4000", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 1000\n- num_epochs: 30\n- mixed_precision_training: Native AMP", "### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.10.0+cu111\n- Datasets 1.13.3\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n", "# wav2vec2-xls-r-tf-left-right-trainer\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the None dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 0.0090\n- eval_wer: 0.0037\n- eval_runtime: 11.2686\n- eval_samples_per_second: 71.703\n- eval_steps_per_second: 8.963\n- epoch: 21.05\n- step: 4000", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 1000\n- num_epochs: 30\n- mixed_precision_training: Native AMP", "### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.10.0+cu111\n- Datasets 1.13.3\n- Tokenizers 0.10.3" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xls-r-timit-tokenizer-base This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.0828 - Wer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:---:| | 3.3134 | 4.03 | 500 | 3.0814 | 1.0 | | 2.9668 | 8.06 | 1000 | 3.0437 | 1.0 | | 2.9604 | 12.1 | 1500 | 3.0337 | 1.0 | | 2.9619 | 16.13 | 2000 | 3.0487 | 1.0 | | 2.9588 | 20.16 | 2500 | 3.0859 | 1.0 | | 2.957 | 24.19 | 3000 | 3.0921 | 1.0 | | 2.9555 | 28.22 | 3500 | 3.0828 | 1.0 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-xls-r-timit-tokenizer-base", "results": []}]}
hrdipto/wav2vec2-xls-r-timit-tokenizer-base
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
wav2vec2-xls-r-timit-tokenizer-base =================================== This model is a fine-tuned version of facebook/wav2vec2-base on the None dataset. It achieves the following results on the evaluation set: * Loss: 3.0828 * Wer: 1.0 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0003 * train\_batch\_size: 16 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 2 * total\_train\_batch\_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * num\_epochs: 30 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.11.3 * Pytorch 1.10.0+cu111 * Datasets 1.13.3 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xls-r-timit-tokenizer This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4285 - Wer: 0.3662 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.1571 | 4.03 | 500 | 0.5235 | 0.5098 | | 0.2001 | 8.06 | 1000 | 0.4172 | 0.4375 | | 0.0968 | 12.1 | 1500 | 0.4562 | 0.4016 | | 0.0607 | 16.13 | 2000 | 0.4640 | 0.4050 | | 0.0409 | 20.16 | 2500 | 0.4688 | 0.3914 | | 0.0273 | 24.19 | 3000 | 0.4414 | 0.3763 | | 0.0181 | 28.22 | 3500 | 0.4285 | 0.3662 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-xls-r-timit-tokenizer", "results": []}]}
hrdipto/wav2vec2-xls-r-timit-tokenizer
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
wav2vec2-xls-r-timit-tokenizer ============================== This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.4285 * Wer: 0.3662 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0003 * train\_batch\_size: 16 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 2 * total\_train\_batch\_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * num\_epochs: 30 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.11.3 * Pytorch 1.10.0+cu111 * Datasets 1.13.3 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3" ]
null
null
# Configuration `title`: _string_ Display title for the Space `emoji`: _string_ Space emoji (emoji-only character allowed) `colorFrom`: _string_ Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) `colorTo`: _string_ Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) `sdk`: _string_ Can be either `gradio` or `streamlit` `sdk_version` : _string_ Only applicable for `streamlit` SDK. See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. `app_file`: _string_ Path to your main application file (which contains either `gradio` or `streamlit` Python code). Path is relative to the root of the repository. `pinned`: _boolean_ Whether the Space stays on top of your list.
{"title": "First Order Motion Model", "emoji": "\ud83d\udc22", "colorFrom": "blue", "colorTo": "yellow", "sdk": "gradio", "app_file": "app.py", "pinned": false}
hrushikute/DanceOnTune
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #region-us
# Configuration 'title': _string_ Display title for the Space 'emoji': _string_ Space emoji (emoji-only character allowed) 'colorFrom': _string_ Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) 'colorTo': _string_ Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) 'sdk': _string_ Can be either 'gradio' or 'streamlit' 'sdk_version' : _string_ Only applicable for 'streamlit' SDK. See doc for more info on supported versions. 'app_file': _string_ Path to your main application file (which contains either 'gradio' or 'streamlit' Python code). Path is relative to the root of the repository. 'pinned': _boolean_ Whether the Space stays on top of your list.
[ "# Configuration\n\n'title': _string_ \nDisplay title for the Space\n\n'emoji': _string_ \nSpace emoji (emoji-only character allowed)\n\n'colorFrom': _string_ \nColor for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)\n\n'colorTo': _string_ \nColor for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)\n\n'sdk': _string_ \nCan be either 'gradio' or 'streamlit'\n\n'sdk_version' : _string_ \nOnly applicable for 'streamlit' SDK. \nSee doc for more info on supported versions.\n\n'app_file': _string_ \nPath to your main application file (which contains either 'gradio' or 'streamlit' Python code). \nPath is relative to the root of the repository.\n\n'pinned': _boolean_ \nWhether the Space stays on top of your list." ]
[ "TAGS\n#region-us \n", "# Configuration\n\n'title': _string_ \nDisplay title for the Space\n\n'emoji': _string_ \nSpace emoji (emoji-only character allowed)\n\n'colorFrom': _string_ \nColor for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)\n\n'colorTo': _string_ \nColor for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)\n\n'sdk': _string_ \nCan be either 'gradio' or 'streamlit'\n\n'sdk_version' : _string_ \nOnly applicable for 'streamlit' SDK. \nSee doc for more info on supported versions.\n\n'app_file': _string_ \nPath to your main application file (which contains either 'gradio' or 'streamlit' Python code). \nPath is relative to the root of the repository.\n\n'pinned': _boolean_ \nWhether the Space stays on top of your list." ]
text-generation
transformers
# Rick and Morty DialoGPT Model
{"tags": ["conversational"]}
hrv/DialoGPT-small-rick-morty
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Rick and Morty DialoGPT Model
[ "# Rick and Morty DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Rick and Morty DialoGPT Model" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4125 - Wer: 0.3607 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.2018 | 7.94 | 500 | 1.3144 | 0.8508 | | 0.4671 | 15.87 | 1000 | 0.4737 | 0.4160 | | 0.1375 | 23.81 | 1500 | 0.4125 | 0.3607 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-base-timit-demo-colab", "results": []}]}
hs788/wav2vec2-base-timit-demo-colab
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
wav2vec2-base-timit-demo-colab ============================== This model is a fine-tuned version of facebook/wav2vec2-base on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.4125 * Wer: 0.3607 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0001 * train\_batch\_size: 64 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 1000 * num\_epochs: 30 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.11.3 * Pytorch 1.10.0+cu111 * Datasets 1.13.3 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3" ]
null
null
Hi, this is Taiwan_House_Prediction.
{}
huang0624/Taiwan_House_Prediction
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #region-us
Hi, this is Taiwan_House_Prediction.
[]
[ "TAGS\n#region-us \n" ]
null
transformers
## DynaBERT: Dynamic BERT with Adaptive Width and Depth * DynaBERT can flexibly adjust the size and latency by selecting adaptive width and depth, and the subnetworks of it have competitive performances as other similar-sized compressed models. The training process of DynaBERT includes first training a width-adaptive BERT and then allowing both adaptive width and depth using knowledge distillation. * This code is modified based on the repository developed by Hugging Face: [Transformers v2.1.1](https://github.com/huggingface/transformers/tree/v2.1.1), and is released in [GitHub](https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/DynaBERT). ### Reference Lu Hou, Zhiqi Huang, Lifeng Shang, Xin Jiang, Xiao Chen, Qun Liu. [DynaBERT: Dynamic BERT with Adaptive Width and Depth](https://arxiv.org/abs/2004.04037). ``` @inproceedings{hou2020dynabert, title = {DynaBERT: Dynamic BERT with Adaptive Width and Depth}, author = {Lu Hou, Zhiqi Huang, Lifeng Shang, Xin Jiang, Xiao Chen, Qun Liu}, booktitle = {Advances in Neural Information Processing Systems}, year = {2020} } ```
{}
huawei-noah/DynaBERT_MNLI
null
[ "transformers", "pytorch", "jax", "bert", "arxiv:2004.04037", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2004.04037" ]
[]
TAGS #transformers #pytorch #jax #bert #arxiv-2004.04037 #endpoints_compatible #region-us
## DynaBERT: Dynamic BERT with Adaptive Width and Depth * DynaBERT can flexibly adjust the size and latency by selecting adaptive width and depth, and the subnetworks of it have competitive performances as other similar-sized compressed models. The training process of DynaBERT includes first training a width-adaptive BERT and then allowing both adaptive width and depth using knowledge distillation. * This code is modified based on the repository developed by Hugging Face: Transformers v2.1.1, and is released in GitHub. ### Reference Lu Hou, Zhiqi Huang, Lifeng Shang, Xin Jiang, Xiao Chen, Qun Liu. DynaBERT: Dynamic BERT with Adaptive Width and Depth.
[ "## DynaBERT: Dynamic BERT with Adaptive Width and Depth\n\n* DynaBERT can flexibly adjust the size and latency by selecting adaptive width and depth, and \nthe subnetworks of it have competitive performances as other similar-sized compressed models.\nThe training process of DynaBERT includes first training a width-adaptive BERT and then \nallowing both adaptive width and depth using knowledge distillation. \n\n* This code is modified based on the repository developed by Hugging Face: Transformers v2.1.1, and is released in GitHub.", "### Reference\nLu Hou, Zhiqi Huang, Lifeng Shang, Xin Jiang, Xiao Chen, Qun Liu.\nDynaBERT: Dynamic BERT with Adaptive Width and Depth." ]
[ "TAGS\n#transformers #pytorch #jax #bert #arxiv-2004.04037 #endpoints_compatible #region-us \n", "## DynaBERT: Dynamic BERT with Adaptive Width and Depth\n\n* DynaBERT can flexibly adjust the size and latency by selecting adaptive width and depth, and \nthe subnetworks of it have competitive performances as other similar-sized compressed models.\nThe training process of DynaBERT includes first training a width-adaptive BERT and then \nallowing both adaptive width and depth using knowledge distillation. \n\n* This code is modified based on the repository developed by Hugging Face: Transformers v2.1.1, and is released in GitHub.", "### Reference\nLu Hou, Zhiqi Huang, Lifeng Shang, Xin Jiang, Xiao Chen, Qun Liu.\nDynaBERT: Dynamic BERT with Adaptive Width and Depth." ]
null
transformers
## DynaBERT: Dynamic BERT with Adaptive Width and Depth * DynaBERT can flexibly adjust the size and latency by selecting adaptive width and depth, and the subnetworks of it have competitive performances as other similar-sized compressed models. The training process of DynaBERT includes first training a width-adaptive BERT and then allowing both adaptive width and depth using knowledge distillation. * This code is modified based on the repository developed by Hugging Face: [Transformers v2.1.1](https://github.com/huggingface/transformers/tree/v2.1.1), and is released in [GitHub](https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/DynaBERT). ### Reference Lu Hou, Zhiqi Huang, Lifeng Shang, Xin Jiang, Xiao Chen, Qun Liu. [DynaBERT: Dynamic BERT with Adaptive Width and Depth](https://arxiv.org/abs/2004.04037). ``` @inproceedings{hou2020dynabert, title = {DynaBERT: Dynamic BERT with Adaptive Width and Depth}, author = {Lu Hou, Zhiqi Huang, Lifeng Shang, Xin Jiang, Xiao Chen, Qun Liu}, booktitle = {Advances in Neural Information Processing Systems}, year = {2020} } ```
{}
huawei-noah/DynaBERT_SST-2
null
[ "transformers", "pytorch", "jax", "bert", "arxiv:2004.04037", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2004.04037" ]
[]
TAGS #transformers #pytorch #jax #bert #arxiv-2004.04037 #endpoints_compatible #region-us
## DynaBERT: Dynamic BERT with Adaptive Width and Depth * DynaBERT can flexibly adjust the size and latency by selecting adaptive width and depth, and the subnetworks of it have competitive performances as other similar-sized compressed models. The training process of DynaBERT includes first training a width-adaptive BERT and then allowing both adaptive width and depth using knowledge distillation. * This code is modified based on the repository developed by Hugging Face: Transformers v2.1.1, and is released in GitHub. ### Reference Lu Hou, Zhiqi Huang, Lifeng Shang, Xin Jiang, Xiao Chen, Qun Liu. DynaBERT: Dynamic BERT with Adaptive Width and Depth.
[ "## DynaBERT: Dynamic BERT with Adaptive Width and Depth\n\n* DynaBERT can flexibly adjust the size and latency by selecting adaptive width and depth, and \nthe subnetworks of it have competitive performances as other similar-sized compressed models.\nThe training process of DynaBERT includes first training a width-adaptive BERT and then \nallowing both adaptive width and depth using knowledge distillation. \n\n* This code is modified based on the repository developed by Hugging Face: Transformers v2.1.1, and is released in GitHub.", "### Reference\nLu Hou, Zhiqi Huang, Lifeng Shang, Xin Jiang, Xiao Chen, Qun Liu.\nDynaBERT: Dynamic BERT with Adaptive Width and Depth." ]
[ "TAGS\n#transformers #pytorch #jax #bert #arxiv-2004.04037 #endpoints_compatible #region-us \n", "## DynaBERT: Dynamic BERT with Adaptive Width and Depth\n\n* DynaBERT can flexibly adjust the size and latency by selecting adaptive width and depth, and \nthe subnetworks of it have competitive performances as other similar-sized compressed models.\nThe training process of DynaBERT includes first training a width-adaptive BERT and then \nallowing both adaptive width and depth using knowledge distillation. \n\n* This code is modified based on the repository developed by Hugging Face: Transformers v2.1.1, and is released in GitHub.", "### Reference\nLu Hou, Zhiqi Huang, Lifeng Shang, Xin Jiang, Xiao Chen, Qun Liu.\nDynaBERT: Dynamic BERT with Adaptive Width and Depth." ]
null
null
# Overview <p align="center"> <img src="https://avatars.githubusercontent.com/u/12619994?s=200&v=4" width="150"> </p> <!-- -------------------------------------------------------------------------------- --> JABER (Junior Arabic BERt) is a 12-layer Arabic pretrained Language Model. JABER obtained rank one on [ALUE leaderboard](https://www.alue.org/leaderboard) at `01/09/2021`. This model is **only compatible** with the code in [this github repo](https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/JABER-PyTorch) (not supported by the [Transformers](https://github.com/huggingface/transformers) library) ## Citation Please cite the following [paper](https://arxiv.org/abs/2112.04329) when using our code and model: ``` bibtex @misc{ghaddar2021jaber, title={JABER: Junior Arabic BERt}, author={Abbas Ghaddar and Yimeng Wu and Ahmad Rashid and Khalil Bibi and Mehdi Rezagholizadeh and Chao Xing and Yasheng Wang and Duan Xinyu and Zhefeng Wang and Baoxing Huai and Xin Jiang and Qun Liu and Philippe Langlais}, year={2021}, eprint={2112.04329}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
{}
huawei-noah/JABER
null
[ "pytorch", "arxiv:2112.04329", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2112.04329" ]
[]
TAGS #pytorch #arxiv-2112.04329 #region-us
# Overview <p align="center"> <img src="URL width="150"> </p> JABER (Junior Arabic BERt) is a 12-layer Arabic pretrained Language Model. JABER obtained rank one on ALUE leaderboard at '01/09/2021'. This model is only compatible with the code in this github repo (not supported by the Transformers library) Please cite the following paper when using our code and model:
[ "# Overview\n\n<p align=\"center\">\n <img src=\"URL width=\"150\">\n</p>\n\n\n\nJABER (Junior Arabic BERt) is a 12-layer Arabic pretrained Language Model. \nJABER obtained rank one on ALUE leaderboard at '01/09/2021'. \nThis model is only compatible with the code in this github repo (not supported by the Transformers library)\n \nPlease cite the following paper when using our code and model:" ]
[ "TAGS\n#pytorch #arxiv-2112.04329 #region-us \n", "# Overview\n\n<p align=\"center\">\n <img src=\"URL width=\"150\">\n</p>\n\n\n\nJABER (Junior Arabic BERt) is a 12-layer Arabic pretrained Language Model. \nJABER obtained rank one on ALUE leaderboard at '01/09/2021'. \nThis model is only compatible with the code in this github repo (not supported by the Transformers library)\n \nPlease cite the following paper when using our code and model:" ]
null
transformers
TinyBERT: Distilling BERT for Natural Language Understanding ======== TinyBERT is 7.5x smaller and 9.4x faster on inference than BERT-base and achieves competitive performances in the tasks of natural language understanding. It performs a novel transformer distillation at both the pre-training and task-specific learning stages. In general distillation, we use the original BERT-base without fine-tuning as the teacher and a large-scale text corpus as the learning data. By performing the Transformer distillation on the text from general domain, we obtain a general TinyBERT which provides a good initialization for the task-specific distillation. We here provide the general TinyBERT for your tasks at hand. For more details about the techniques of TinyBERT, refer to our paper: [TinyBERT: Distilling BERT for Natural Language Understanding](https://arxiv.org/abs/1909.10351) Citation ======== If you find TinyBERT useful in your research, please cite the following paper: ``` @article{jiao2019tinybert, title={Tinybert: Distilling bert for natural language understanding}, author={Jiao, Xiaoqi and Yin, Yichun and Shang, Lifeng and Jiang, Xin and Chen, Xiao and Li, Linlin and Wang, Fang and Liu, Qun}, journal={arXiv preprint arXiv:1909.10351}, year={2019} } ```
{}
huawei-noah/TinyBERT_General_4L_312D
null
[ "transformers", "pytorch", "jax", "bert", "arxiv:1909.10351", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1909.10351" ]
[]
TAGS #transformers #pytorch #jax #bert #arxiv-1909.10351 #endpoints_compatible #has_space #region-us
TinyBERT: Distilling BERT for Natural Language Understanding ======== TinyBERT is 7.5x smaller and 9.4x faster on inference than BERT-base and achieves competitive performances in the tasks of natural language understanding. It performs a novel transformer distillation at both the pre-training and task-specific learning stages. In general distillation, we use the original BERT-base without fine-tuning as the teacher and a large-scale text corpus as the learning data. By performing the Transformer distillation on the text from general domain, we obtain a general TinyBERT which provides a good initialization for the task-specific distillation. We here provide the general TinyBERT for your tasks at hand. For more details about the techniques of TinyBERT, refer to our paper: TinyBERT: Distilling BERT for Natural Language Understanding Citation ======== If you find TinyBERT useful in your research, please cite the following paper:
[]
[ "TAGS\n#transformers #pytorch #jax #bert #arxiv-1909.10351 #endpoints_compatible #has_space #region-us \n" ]
null
null
This is an Audacity wrapper for the model, forked from the repository `groadabike/ConvTasNet_DAMP-VSEP_enhboth`, This model was trained using the Asteroid library: https://github.com/asteroid-team/asteroid. The following info was copied directly from `groadabike/ConvTasNet_DAMP-VSEP_enhboth`: ### Description: This model was trained by Gerardo Roa Dabike using Asteroid. It was trained on the enh_both task of the DAMP-VSEP dataset. ### Training config: ```yaml data: channels: 1 n_src: 2 root_path: data sample_rate: 16000 samples_per_track: 10 segment: 3.0 task: enh_both filterbank: kernel_size: 20 n_filters: 256 stride: 10 main_args: exp_dir: exp/train_convtasnet help: None masknet: bn_chan: 256 conv_kernel_size: 3 hid_chan: 512 mask_act: relu n_blocks: 8 n_repeats: 4 n_src: 2 norm_type: gLN skip_chan: 256 optim: lr: 0.0003 optimizer: adam weight_decay: 0.0 positional arguments: training: batch_size: 12 early_stop: True epochs: 50 half_lr: True num_workers: 12 ``` ### Results: ```yaml si_sdr: 14.018196157142519 si_sdr_imp: 14.017103133809577 sdr: 14.498517291333885 sdr_imp: 14.463389151567865 sir: 24.149634529133372 sir_imp: 24.11450638936735 sar: 15.338597389045935 sar_imp: -137.30634122401517 stoi: 0.7639416744417206 stoi_imp: 0.1843383526963759 ``` ### License notice: This work "ConvTasNet_DAMP-VSEP_enhboth" is a derivative of DAMP-VSEP: Smule Digital Archive of Mobile Performances - Vocal Separation (Version 1.0.1) by Smule, Inc, used under Smule's Research Data License Agreement (Research only). "ConvTasNet_DAMP-VSEP_enhboth" is licensed under Attribution-ShareAlike 3.0 Unported by Gerardo Roa Dabike.
{"tags": ["audacity"], "inference": false, "sample_rate": 8000}
hugggof/ConvTasNet-DAMP-Vocals
null
[ "audacity", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #audacity #region-us
This is an Audacity wrapper for the model, forked from the repository 'groadabike/ConvTasNet_DAMP-VSEP_enhboth', This model was trained using the Asteroid library: URL The following info was copied directly from 'groadabike/ConvTasNet_DAMP-VSEP_enhboth': ### Description: This model was trained by Gerardo Roa Dabike using Asteroid. It was trained on the enh_both task of the DAMP-VSEP dataset. ### Training config: ### Results: ### License notice: This work "ConvTasNet_DAMP-VSEP_enhboth" is a derivative of DAMP-VSEP: Smule Digital Archive of Mobile Performances - Vocal Separation (Version 1.0.1) by Smule, Inc, used under Smule's Research Data License Agreement (Research only). "ConvTasNet_DAMP-VSEP_enhboth" is licensed under Attribution-ShareAlike 3.0 Unported by Gerardo Roa Dabike.
[ "### Description:\nThis model was trained by Gerardo Roa Dabike using Asteroid. It was trained on the enh_both task of the DAMP-VSEP dataset.", "### Training config:", "### Results:", "### License notice:\nThis work \"ConvTasNet_DAMP-VSEP_enhboth\" is a derivative of DAMP-VSEP: Smule Digital Archive of Mobile Performances - Vocal Separation (Version 1.0.1) by Smule, Inc, used under Smule's Research Data License Agreement (Research only). \"ConvTasNet_DAMP-VSEP_enhboth\" is licensed under Attribution-ShareAlike 3.0 Unported by Gerardo Roa Dabike." ]
[ "TAGS\n#audacity #region-us \n", "### Description:\nThis model was trained by Gerardo Roa Dabike using Asteroid. It was trained on the enh_both task of the DAMP-VSEP dataset.", "### Training config:", "### Results:", "### License notice:\nThis work \"ConvTasNet_DAMP-VSEP_enhboth\" is a derivative of DAMP-VSEP: Smule Digital Archive of Mobile Performances - Vocal Separation (Version 1.0.1) by Smule, Inc, used under Smule's Research Data License Agreement (Research only). \"ConvTasNet_DAMP-VSEP_enhboth\" is licensed under Attribution-ShareAlike 3.0 Unported by Gerardo Roa Dabike." ]
null
null
This is an Audacity wrapper for the model, forked from the repository `JorisCos/ConvTasNet_Libri3Mix_sepnoisy_16k`. This model was trained using the Asteroid library: https://github.com/asteroid-team/asteroid. The following info was copied directly from `JorisCos/ConvTasNet_Libri3Mix_sepnoisy_16k`: Description: This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid). It was trained on the `sep_noisy` task of the Libri3Mix dataset. Training config: ```yml data: n_src: 3 sample_rate: 16000 segment: 3 task: sep_noisy train_dir: data/wav16k/min/train-360 valid_dir: data/wav16k/min/dev filterbank: kernel_size: 32 n_filters: 512 stride: 16 masknet: bn_chan: 128 hid_chan: 512 mask_act: relu n_blocks: 8 n_repeats: 3 n_src: 3 skip_chan: 128 optim: lr: 0.001 optimizer: adam weight_decay: 0.0 training: batch_size: 8 early_stop: true epochs: 200 half_lr: true num_workers: 4 ``` Results: On Libri3Mix min test set: ```yml si_sdr: 5.926151147554517 si_sdr_imp: 10.282912158535625 sdr: 6.700975236867358 sdr_imp: 10.882972447337504 sir: 15.364110064569388 sir_imp: 18.574476587171688 sar: 7.918866830474568 sar_imp: -0.9638973409971135 stoi: 0.7713777027310713 stoi_imp: 0.2078696167973911 ``` License notice: This work "ConvTasNet_Libri3Mix_sepnoisy_16k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov, used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/). "ConvTasNet_Libri3Mix_sepnoisy_16k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino.
{"tags": ["audacity"], "inference": false}
hugggof/ConvTasNet_Libri3Mix_sepnoisy_16k
null
[ "audacity", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #audacity #region-us
This is an Audacity wrapper for the model, forked from the repository 'JorisCos/ConvTasNet_Libri3Mix_sepnoisy_16k'. This model was trained using the Asteroid library: URL The following info was copied directly from 'JorisCos/ConvTasNet_Libri3Mix_sepnoisy_16k': Description: This model was trained by Joris Cosentino using the librimix recipe in Asteroid. It was trained on the 'sep_noisy' task of the Libri3Mix dataset. Training config: Results: On Libri3Mix min test set: License notice: This work "ConvTasNet_Libri3Mix_sepnoisy_16k" is a derivative of LibriSpeech ASR corpus by Vassil Panayotov, used under CC BY 4.0; of The WSJ0 Hipster Ambient Mixtures dataset by URL, used under CC BY-NC 4.0. "ConvTasNet_Libri3Mix_sepnoisy_16k" is licensed under Attribution-ShareAlike 3.0 Unported by Joris Cosentino.
[]
[ "TAGS\n#audacity #region-us \n" ]
null
null
This is an Audacity wrapper for the model, forked from the repository mpariente/ConvTasNet_WHAM_sepclean. This model was trained using the Asteroid library: https://github.com/asteroid-team/asteroid. The following info was copied from `mpariente/ConvTasNet_WHAM_sepclean`: ### Description: This model was trained by Manuel Pariente using the wham/ConvTasNet recipe in [Asteroid](https://github.com/asteroid-team/asteroid). It was trained on the `sep_clean` task of the WHAM! dataset. ### Training config: ```yaml data: n_src: 2 mode: min nondefault_nsrc: None sample_rate: 8000 segment: 3 task: sep_clean train_dir: data/wav8k/min/tr/ valid_dir: data/wav8k/min/cv/ filterbank: kernel_size: 16 n_filters: 512 stride: 8 main_args: exp_dir: exp/wham gpus: -1 help: None masknet: bn_chan: 128 hid_chan: 512 mask_act: relu n_blocks: 8 n_repeats: 3 n_src: 2 skip_chan: 128 optim: lr: 0.001 optimizer: adam weight_decay: 0.0 positional arguments: training: batch_size: 24 early_stop: True epochs: 200 half_lr: True num_workers: 4 ``` ### Results: ```yaml si_sdr: 16.21326632846293 si_sdr_imp: 16.21441705664987 sdr: 16.615180021738933 sdr_imp: 16.464137807433435 sir: 26.860503975131923 sir_imp: 26.709461760826414 sar: 17.18312813480803 sar_imp: -131.99332048277296 stoi: 0.9619940905157323 stoi_imp: 0.2239480672473015 ``` ### License notice: This work "ConvTasNet_WHAM!_sepclean" is a derivative of [CSR-I (WSJ0) Complete](https://catalog.ldc.upenn.edu/LDC93S6A) by [LDC](https://www.ldc.upenn.edu/), used under [LDC User Agreement for Non-Members](https://catalog.ldc.upenn.edu/license/ldc-non-members-agreement.pdf) (Research only). "ConvTasNet_WHAM!_sepclean" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Manuel Pariente.
{"tags": ["audacity"], "inference": false}
hugggof/ConvTasNet_WHAM_sepclean
null
[ "audacity", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #audacity #region-us
This is an Audacity wrapper for the model, forked from the repository mpariente/ConvTasNet_WHAM_sepclean. This model was trained using the Asteroid library: URL The following info was copied from 'mpariente/ConvTasNet_WHAM_sepclean': ### Description: This model was trained by Manuel Pariente using the wham/ConvTasNet recipe in Asteroid. It was trained on the 'sep_clean' task of the WHAM! dataset. ### Training config: ### Results: ### License notice: This work "ConvTasNet_WHAM!_sepclean" is a derivative of CSR-I (WSJ0) Complete by LDC, used under LDC User Agreement for Non-Members (Research only). "ConvTasNet_WHAM!_sepclean" is licensed under Attribution-ShareAlike 3.0 Unported by Manuel Pariente.
[ "### Description:\nThis model was trained by Manuel Pariente \nusing the wham/ConvTasNet recipe in Asteroid.\nIt was trained on the 'sep_clean' task of the WHAM! dataset.", "### Training config:", "### Results:", "### License notice:\nThis work \"ConvTasNet_WHAM!_sepclean\" is a derivative of CSR-I (WSJ0) Complete\nby LDC, used under LDC User Agreement for \nNon-Members (Research only). \n\"ConvTasNet_WHAM!_sepclean\" is licensed under Attribution-ShareAlike 3.0 Unported\nby Manuel Pariente." ]
[ "TAGS\n#audacity #region-us \n", "### Description:\nThis model was trained by Manuel Pariente \nusing the wham/ConvTasNet recipe in Asteroid.\nIt was trained on the 'sep_clean' task of the WHAM! dataset.", "### Training config:", "### Results:", "### License notice:\nThis work \"ConvTasNet_WHAM!_sepclean\" is a derivative of CSR-I (WSJ0) Complete\nby LDC, used under LDC User Agreement for \nNon-Members (Research only). \n\"ConvTasNet_WHAM!_sepclean\" is licensed under Attribution-ShareAlike 3.0 Unported\nby Manuel Pariente." ]
null
null
## Music Source Separation in the Waveform Domain This is the Demucs model, serialized from Facebook Research's pretrained models. From Facebook research: Demucs is based on U-Net convolutional architecture inspired by Wave-U-Net and SING, with GLUs, a BiLSTM between the encoder and decoder, specific initialization of weights and transposed convolutions in the decoder. This is the `demucs_extra` version, meaning that it was trained on the MusDB dataset, along with 150 extra songs of data. See [facebookresearch's repository](https://github.com/facebookresearch/demucs) for more information on Demucs.
{"tags": "audacity"}
hugggof/demucs_extra
null
[ "audacity", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #audacity #region-us
## Music Source Separation in the Waveform Domain This is the Demucs model, serialized from Facebook Research's pretrained models. From Facebook research: Demucs is based on U-Net convolutional architecture inspired by Wave-U-Net and SING, with GLUs, a BiLSTM between the encoder and decoder, specific initialization of weights and transposed convolutions in the decoder. This is the 'demucs_extra' version, meaning that it was trained on the MusDB dataset, along with 150 extra songs of data. See facebookresearch's repository for more information on Demucs.
[ "## Music Source Separation in the Waveform Domain\n\nThis is the Demucs model, serialized from Facebook Research's pretrained models. \n\nFrom Facebook research:\n\n Demucs is based on U-Net convolutional architecture inspired by Wave-U-Net and SING, with GLUs, a BiLSTM between the encoder and decoder, specific initialization of weights and transposed convolutions in the decoder.\n\n\nThis is the 'demucs_extra' version, meaning that is was trained on the MusDB dataset, along with 150 extra songs of data. \n\nSee facebookresearch's repository for more information on Demucs." ]
[ "TAGS\n#audacity #region-us \n", "## Music Source Separation in the Waveform Domain\n\nThis is the Demucs model, serialized from Facebook Research's pretrained models. \n\nFrom Facebook research:\n\n Demucs is based on U-Net convolutional architecture inspired by Wave-U-Net and SING, with GLUs, a BiLSTM between the encoder and decoder, specific initialization of weights and transposed convolutions in the decoder.\n\n\nThis is the 'demucs_extra' version, meaning that is was trained on the MusDB dataset, along with 150 extra songs of data. \n\nSee facebookresearch's repository for more information on Demucs." ]
null
null
# Labeler With Timestamps ## Being used for the `Audio Labeler` effect in Audacity This is an audio labeler model which is used in Audacity's labeler effect. metadata: ``` { "sample_rate": 48000, "domain_tags": ["Music"], "tags": ["Audio Labeler"], "effect_type": "waveform-to-labels", "multichannel": false, "labels": ["Acoustic Guitar", "Auxiliary Percussion", "Brass", "Clean Electric Guitar", "Distorted Electric Guitar", "Double Bass", "Drum Set", "Electric Bass", "Flute", "piano", "Reeds", "Saxophone", "Strings", "Trumpet", "Voice"], "short_description": "Use me to label some instruments!", "long_description": "An audio labeler, which outputs label predictions and time ranges for the labels. This model can label various instruments listed in the labels section." } ```
{"tags": ["audacity"], "inference": false}
hugggof/openl3-labeler-w-timestamps
null
[ "audacity", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #audacity #region-us
# Labeler With Timestamps ## Being used for the 'Audio Labeler' effect in Audacity This is an audio labeler model which is used in Audacity's labeler effect. metadata:
[ "# Labeler With Timestamps", "## Being used for the 'Audio Labeler' effect in Audacity\n\nThis is a audio labeler model which is used in Audacity's labeler effect. \n\nmetadata:" ]
[ "TAGS\n#audacity #region-us \n", "# Labeler With Timestamps", "## Being used for the 'Audio Labeler' effect in Audacity\n\nThis is a audio labeler model which is used in Audacity's labeler effect. \n\nmetadata:" ]
text-generation
transformers
<div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/9fd98af9a817af8cd78636f71895b6ad.500x500x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">100 gecs</div> <a href="https://genius.com/artists/100-gecs"> <div style="text-align: center; font-size: 14px;">@100-gecs</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from 100 gecs. Dataset is available [here](https://huggingface.co/datasets/huggingartists/100-gecs). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/100-gecs") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/3c9j4tvq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on 100 gecs's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1v0ffa4e) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1v0ffa4e/artifacts) is logged and versioned. 
## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/100-gecs') generator("I am", num_return_sequences=5) ``` Or with Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/100-gecs") model = AutoModelWithLMHead.from_pretrained("huggingartists/100-gecs") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
{"language": "en", "tags": ["huggingartists", "lyrics", "lm-head", "causal-lm"], "datasets": ["huggingartists/100-gecs"], "widget": [{"text": "I am"}]}
huggingartists/100-gecs
null
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/100-gecs", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #jax #gpt2 #text-generation #huggingartists #lyrics #lm-head #causal-lm #en #dataset-huggingartists/100-gecs #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
<div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;URL </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800"> HuggingArtists Model </div> <div style="text-align: center; font-size: 16px; font-weight: 800">100 gecs</div> <a href="URL <div style="text-align: center; font-size: 14px;">@100-gecs</div> </a> </div> I was made with huggingartists. Create your own bot based on your favorite artist with the demo! ## How does it work? To understand how the model was developed, check the W&B report. ## Training data The model was trained on lyrics from 100 gecs. Dataset is available here. And can be used with: Explore the data, which is tracked with W&B artifacts at every step of the pipeline. ## Training procedure The model is based on a pre-trained GPT-2 which is fine-tuned on 100 gecs's lyrics. Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility. At the end of training, the final model is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: Or with Transformers library: ## Limitations and bias The model suffers from the same limitations and bias as GPT-2. In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Aleksey Korshuk* ![Follow](URL ![Follow](URL ![Follow](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. ![GitHub stars](URL
[ "## How does it work?\n\nTo understand how the model was developed, check the W&B report.", "## Training data\n\nThe model was trained on lyrics from 100 gecs.\n\nDataset is available here.\nAnd can be used with:\n\n\n\nExplore the data, which is tracked with W&B artifacts at every step of the pipeline.", "## Training procedure\n\nThe model is based on a pre-trained GPT-2 which is fine-tuned on 100 gecs's lyrics.\n\nHyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.\n\nAt the end of training, the final model is logged and versioned.", "## How to use\n\nYou can use this model directly with a pipeline for text generation:\n\n\n\nOr with Transformers library:", "## Limitations and bias\n\nThe model suffers from the same limitations and bias as GPT-2.\n\nIn addition, the data present in the user's tweets further affects the text generated by the model.", "## About\n\n*Built by Aleksey Korshuk*\n\n![Follow](URL\n\n![Follow](URL\n\n![Follow](https://t.me/joinchat/_CQ04KjcJ-4yZTky)\n\nFor more details, visit the project repository.\n\n![GitHub stars](URL" ]
[ "TAGS\n#transformers #pytorch #jax #gpt2 #text-generation #huggingartists #lyrics #lm-head #causal-lm #en #dataset-huggingartists/100-gecs #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "## How does it work?\n\nTo understand how the model was developed, check the W&B report.", "## Training data\n\nThe model was trained on lyrics from 100 gecs.\n\nDataset is available here.\nAnd can be used with:\n\n\n\nExplore the data, which is tracked with W&B artifacts at every step of the pipeline.", "## Training procedure\n\nThe model is based on a pre-trained GPT-2 which is fine-tuned on 100 gecs's lyrics.\n\nHyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.\n\nAt the end of training, the final model is logged and versioned.", "## How to use\n\nYou can use this model directly with a pipeline for text generation:\n\n\n\nOr with Transformers library:", "## Limitations and bias\n\nThe model suffers from the same limitations and bias as GPT-2.\n\nIn addition, the data present in the user's tweets further affects the text generated by the model.", "## About\n\n*Built by Aleksey Korshuk*\n\n![Follow](URL\n\n![Follow](URL\n\n![Follow](https://t.me/joinchat/_CQ04KjcJ-4yZTky)\n\nFor more details, visit the project repository.\n\n![GitHub stars](URL" ]
text-generation
transformers
<div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/aa32202cc20d1dde62e57940a8b278b2.770x770x1.png&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">21 Savage</div> <a href="https://genius.com/artists/21-savage"> <div style="text-align: center; font-size: 14px;">@21-savage</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from 21 Savage. Dataset is available [here](https://huggingface.co/datasets/huggingartists/21-savage). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/21-savage") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/3lbkznnf/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on 21 Savage's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1fw9b6m4) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1fw9b6m4/artifacts) is logged and versioned. 
## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/21-savage') generator("I am", num_return_sequences=5) ``` Or with Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/21-savage") model = AutoModelWithLMHead.from_pretrained("huggingartists/21-savage") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
{"language": "en", "tags": ["huggingartists", "lyrics", "lm-head", "causal-lm"], "datasets": ["huggingartists/21-savage"], "widget": [{"text": "I am"}]}
huggingartists/21-savage
null
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/21-savage", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #jax #gpt2 #text-generation #huggingartists #lyrics #lm-head #causal-lm #en #dataset-huggingartists/21-savage #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
<div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;URL </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800"> HuggingArtists Model </div> <div style="text-align: center; font-size: 16px; font-weight: 800">21 Savage</div> <a href="URL <div style="text-align: center; font-size: 14px;">@21-savage</div> </a> </div> I was made with huggingartists. Create your own bot based on your favorite artist with the demo! ## How does it work? To understand how the model was developed, check the W&B report. ## Training data The model was trained on lyrics from 21 Savage. Dataset is available here. And can be used with: Explore the data, which is tracked with W&B artifacts at every step of the pipeline. ## Training procedure The model is based on a pre-trained GPT-2 which is fine-tuned on 21 Savage's lyrics. Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility. At the end of training, the final model is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: Or with Transformers library: ## Limitations and bias The model suffers from the same limitations and bias as GPT-2. In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Aleksey Korshuk* ![Follow](URL ![Follow](URL ![Follow](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. ![GitHub stars](URL
[ "## How does it work?\n\nTo understand how the model was developed, check the W&B report.", "## Training data\n\nThe model was trained on lyrics from 21 Savage.\n\nDataset is available here.\nAnd can be used with:\n\n\n\nExplore the data, which is tracked with W&B artifacts at every step of the pipeline.", "## Training procedure\n\nThe model is based on a pre-trained GPT-2 which is fine-tuned on 21 Savage's lyrics.\n\nHyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.\n\nAt the end of training, the final model is logged and versioned.", "## How to use\n\nYou can use this model directly with a pipeline for text generation:\n\n\n\nOr with Transformers library:", "## Limitations and bias\n\nThe model suffers from the same limitations and bias as GPT-2.\n\nIn addition, the data present in the user's tweets further affects the text generated by the model.", "## About\n\n*Built by Aleksey Korshuk*\n\n![Follow](URL\n\n![Follow](URL\n\n![Follow](https://t.me/joinchat/_CQ04KjcJ-4yZTky)\n\nFor more details, visit the project repository.\n\n![GitHub stars](URL" ]
[ "TAGS\n#transformers #pytorch #jax #gpt2 #text-generation #huggingartists #lyrics #lm-head #causal-lm #en #dataset-huggingartists/21-savage #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "## How does it work?\n\nTo understand how the model was developed, check the W&B report.", "## Training data\n\nThe model was trained on lyrics from 21 Savage.\n\nDataset is available here.\nAnd can be used with:\n\n\n\nExplore the data, which is tracked with W&B artifacts at every step of the pipeline.", "## Training procedure\n\nThe model is based on a pre-trained GPT-2 which is fine-tuned on 21 Savage's lyrics.\n\nHyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.\n\nAt the end of training, the final model is logged and versioned.", "## How to use\n\nYou can use this model directly with a pipeline for text generation:\n\n\n\nOr with Transformers library:", "## Limitations and bias\n\nThe model suffers from the same limitations and bias as GPT-2.\n\nIn addition, the data present in the user's tweets further affects the text generated by the model.", "## About\n\n*Built by Aleksey Korshuk*\n\n![Follow](URL\n\n![Follow](URL\n\n![Follow](https://t.me/joinchat/_CQ04KjcJ-4yZTky)\n\nFor more details, visit the project repository.\n\n![GitHub stars](URL" ]
text-generation
transformers
<div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/4fedc5dd2830a874a5274bf1cac62002.1000x1000x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">25/17</div> <a href="https://genius.com/artists/25-17"> <div style="text-align: center; font-size: 14px;">@25-17</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from 25/17. Dataset is available [here](https://huggingface.co/datasets/huggingartists/25-17). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/25-17") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/1iuytbjp/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on 25/17's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/knv4l4gw) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/knv4l4gw/artifacts) is logged and versioned. 
## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/25-17') generator("I am", num_return_sequences=5) ``` Or with Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/25-17") model = AutoModelWithLMHead.from_pretrained("huggingartists/25-17") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
{"language": "en", "tags": ["huggingartists", "lyrics", "lm-head", "causal-lm"], "datasets": ["huggingartists/25-17"], "widget": [{"text": "I am"}]}
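The metadata field above is plain JSON, so the widget prompt and the companion dataset id can be recovered with the standard library. This sketch uses only the field names actually present in the record; nothing else is assumed:

```python
import json

# The metadata field of a huggingartists card record, copied verbatim.
record = ('{"language": "en", "tags": ["huggingartists", "lyrics", "lm-head", '
          '"causal-lm"], "datasets": ["huggingartists/25-17"], '
          '"widget": [{"text": "I am"}]}')

meta = json.loads(record)
prompt = meta["widget"][0]["text"]  # default prompt for the hosted widget
dataset_id = meta["datasets"][0]    # companion lyrics dataset on the Hub

print(prompt, dataset_id)  # I am huggingartists/25-17
```

The same parsing applies to every record in this dump, since the metadata schema is identical across artists.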
huggingartists/25-17
null
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/25-17", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #jax #gpt2 #text-generation #huggingartists #lyrics #lm-head #causal-lm #en #dataset-huggingartists/25-17 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
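The flattened `TAGS` string in this record is a mechanical rendering of the tags list a few fields earlier. A small sketch of the apparent rule (each tag gets a `#` prefix, and the colon in namespaced tags like `dataset:…` or `region:us` becomes a hyphen) reproduces it exactly; the rule is inferred from this dump, not from any official spec:

```python
def render_tags(tags):
    # "#" prefix per tag; ":" in namespaced tags becomes "-"
    return "TAGS\n" + " ".join("#" + t.replace(":", "-") for t in tags) + " \n"

tags = ["transformers", "pytorch", "jax", "gpt2", "text-generation",
        "huggingartists", "lyrics", "lm-head", "causal-lm", "en",
        "dataset:huggingartists/25-17", "autotrain_compatible",
        "endpoints_compatible", "text-generation-inference", "region:us"]

print(render_tags(tags))
```

Applied to the tags list of this record, the function returns the same string that appears in the `text_lists` field below.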
<div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;URL </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800"> HuggingArtists Model </div> <div style="text-align: center; font-size: 16px; font-weight: 800">25/17</div> <a href="URL <div style="text-align: center; font-size: 14px;">@25-17</div> </a> </div> I was made with huggingartists. Create your own bot based on your favorite artist with the demo! ## How does it work? To understand how the model was developed, check the W&B report. ## Training data The model was trained on lyrics from 25/17. Dataset is available here. And can be used with: Explore the data, which is tracked with W&B artifacts at every step of the pipeline. ## Training procedure The model is based on a pre-trained GPT-2 which is fine-tuned on 25/17's lyrics. Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility. At the end of training, the final model is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: Or with Transformers library: ## Limitations and bias The model suffers from the same limitations and bias as GPT-2. In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Aleksey Korshuk* ![Follow](URL ![Follow](URL ![Follow](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. ![GitHub stars](URL
[ "## How does it work?\n\nTo understand how the model was developed, check the W&B report.", "## Training data\n\nThe model was trained on lyrics from 25/17.\n\nDataset is available here.\nAnd can be used with:\n\n\n\nExplore the data, which is tracked with W&B artifacts at every step of the pipeline.", "## Training procedure\n\nThe model is based on a pre-trained GPT-2 which is fine-tuned on 25/17's lyrics.\n\nHyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.\n\nAt the end of training, the final model is logged and versioned.", "## How to use\n\nYou can use this model directly with a pipeline for text generation:\n\n\n\nOr with Transformers library:", "## Limitations and bias\n\nThe model suffers from the same limitations and bias as GPT-2.\n\nIn addition, the data present in the user's tweets further affects the text generated by the model.", "## About\n\n*Built by Aleksey Korshuk*\n\n![Follow](URL\n\n![Follow](URL\n\n![Follow](https://t.me/joinchat/_CQ04KjcJ-4yZTky)\n\nFor more details, visit the project repository.\n\n![GitHub stars](URL" ]
[ "TAGS\n#transformers #pytorch #jax #gpt2 #text-generation #huggingartists #lyrics #lm-head #causal-lm #en #dataset-huggingartists/25-17 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "## How does it work?\n\nTo understand how the model was developed, check the W&B report.", "## Training data\n\nThe model was trained on lyrics from 25/17.\n\nDataset is available here.\nAnd can be used with:\n\n\n\nExplore the data, which is tracked with W&B artifacts at every step of the pipeline.", "## Training procedure\n\nThe model is based on a pre-trained GPT-2 which is fine-tuned on 25/17's lyrics.\n\nHyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.\n\nAt the end of training, the final model is logged and versioned.", "## How to use\n\nYou can use this model directly with a pipeline for text generation:\n\n\n\nOr with Transformers library:", "## Limitations and bias\n\nThe model suffers from the same limitations and bias as GPT-2.\n\nIn addition, the data present in the user's tweets further affects the text generated by the model.", "## About\n\n*Built by Aleksey Korshuk*\n\n![Follow](URL\n\n![Follow](URL\n\n![Follow](https://t.me/joinchat/_CQ04KjcJ-4yZTky)\n\nFor more details, visit the project repository.\n\n![GitHub stars](URL" ]
text-generation
transformers
<div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/10f98dca7bcd1a31222e36374544cad5.1000x1000x1.png&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">50 Cent</div> <a href="https://genius.com/artists/50-cent"> <div style="text-align: center; font-size: 14px;">@50-cent</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from 50 Cent. Dataset is available [here](https://huggingface.co/datasets/huggingartists/50-cent). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/50-cent") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/1291qx5n/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on 50 Cent's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1igwpphq) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1igwpphq/artifacts) is logged and versioned. 
## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/50-cent') generator("I am", num_return_sequences=5) ``` Or with the Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/50-cent") model = AutoModelWithLMHead.from_pretrained("huggingartists/50-cent") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the artist's lyrics further affects the text generated by the model. ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
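The pipeline call in these cards samples several continuations of the same prompt. As a toy, model-free illustration of why `num_return_sequences=5` yields different outputs, the sketch below samples from a hypothetical bigram table standing in for the real GPT-2 distribution (the table and vocabulary are invented for the example):

```python
import random

# Hypothetical next-token distribution, standing in for model logits.
bigram = {
    "I":  [("am", 0.7), ("was", 0.3)],
    "am": [("here", 0.5), ("real", 0.5)],
}

def generate(prompt, steps=2, rng=None):
    # Greedy-free sampling loop: draw each next token from the
    # distribution conditioned on the previous token.
    rng = rng or random.Random()
    tokens = prompt.split()
    for _ in range(steps):
        options = bigram.get(tokens[-1])
        if not options:
            break
        words, probs = zip(*options)
        tokens.append(rng.choices(words, weights=probs)[0])
    return " ".join(tokens)

rng = random.Random(0)
samples = [generate("I", rng=rng) for _ in range(5)]
print(samples)
```

Because each draw is random, the five samples generally differ, which is exactly the behavior the `num_return_sequences` argument exposes on the real pipeline.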
{"language": "en", "tags": ["huggingartists", "lyrics", "lm-head", "causal-lm"], "datasets": ["huggingartists/50-cent"], "widget": [{"text": "I am"}]}
huggingartists/50-cent
null
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/50-cent", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #jax #gpt2 #text-generation #huggingartists #lyrics #lm-head #causal-lm #en #dataset-huggingartists/50-cent #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
<div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;URL </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800"> HuggingArtists Model </div> <div style="text-align: center; font-size: 16px; font-weight: 800">50 Cent</div> <a href="URL <div style="text-align: center; font-size: 14px;">@50-cent</div> </a> </div> I was made with huggingartists. Create your own bot based on your favorite artist with the demo! ## How does it work? To understand how the model was developed, check the W&B report. ## Training data The model was trained on lyrics from 50 Cent. Dataset is available here. And can be used with: Explore the data, which is tracked with W&B artifacts at every step of the pipeline. ## Training procedure The model is based on a pre-trained GPT-2 which is fine-tuned on 50 Cent's lyrics. Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility. At the end of training, the final model is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: Or with Transformers library: ## Limitations and bias The model suffers from the same limitations and bias as GPT-2. In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Aleksey Korshuk* ![Follow](URL ![Follow](URL ![Follow](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. ![GitHub stars](URL
[ "## How does it work?\n\nTo understand how the model was developed, check the W&B report.", "## Training data\n\nThe model was trained on lyrics from 50 Cent.\n\nDataset is available here.\nAnd can be used with:\n\n\n\nExplore the data, which is tracked with W&B artifacts at every step of the pipeline.", "## Training procedure\n\nThe model is based on a pre-trained GPT-2 which is fine-tuned on 50 Cent's lyrics.\n\nHyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.\n\nAt the end of training, the final model is logged and versioned.", "## How to use\n\nYou can use this model directly with a pipeline for text generation:\n\n\n\nOr with Transformers library:", "## Limitations and bias\n\nThe model suffers from the same limitations and bias as GPT-2.\n\nIn addition, the data present in the user's tweets further affects the text generated by the model.", "## About\n\n*Built by Aleksey Korshuk*\n\n![Follow](URL\n\n![Follow](URL\n\n![Follow](https://t.me/joinchat/_CQ04KjcJ-4yZTky)\n\nFor more details, visit the project repository.\n\n![GitHub stars](URL" ]
[ "TAGS\n#transformers #pytorch #jax #gpt2 #text-generation #huggingartists #lyrics #lm-head #causal-lm #en #dataset-huggingartists/50-cent #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "## How does it work?\n\nTo understand how the model was developed, check the W&B report.", "## Training data\n\nThe model was trained on lyrics from 50 Cent.\n\nDataset is available here.\nAnd can be used with:\n\n\n\nExplore the data, which is tracked with W&B artifacts at every step of the pipeline.", "## Training procedure\n\nThe model is based on a pre-trained GPT-2 which is fine-tuned on 50 Cent's lyrics.\n\nHyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.\n\nAt the end of training, the final model is logged and versioned.", "## How to use\n\nYou can use this model directly with a pipeline for text generation:\n\n\n\nOr with Transformers library:", "## Limitations and bias\n\nThe model suffers from the same limitations and bias as GPT-2.\n\nIn addition, the data present in the user's tweets further affects the text generated by the model.", "## About\n\n*Built by Aleksey Korshuk*\n\n![Follow](URL\n\n![Follow](URL\n\n![Follow](https://t.me/joinchat/_CQ04KjcJ-4yZTky)\n\nFor more details, visit the project repository.\n\n![GitHub stars](URL" ]
text-generation
transformers
<div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/289ded19d51d41798be99217d6059eb3.458x458x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">5’Nizza</div> <a href="https://genius.com/artists/5nizza"> <div style="text-align: center; font-size: 14px;">@5nizza</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from 5’Nizza. Dataset is available [here](https://huggingface.co/datasets/huggingartists/5nizza). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/5nizza") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/1zcp1grf/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on 5’Nizza's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/2zg6pzw7) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/2zg6pzw7/artifacts) is logged and versioned. 
## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/5nizza') generator("I am", num_return_sequences=5) ``` Or with the Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/5nizza") model = AutoModelWithLMHead.from_pretrained("huggingartists/5nizza") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the artist's lyrics further affects the text generated by the model. ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
{"language": "en", "tags": ["huggingartists", "lyrics", "lm-head", "causal-lm"], "datasets": ["huggingartists/5nizza"], "widget": [{"text": "I am"}]}
huggingartists/5nizza
null
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/5nizza", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #jax #gpt2 #text-generation #huggingartists #lyrics #lm-head #causal-lm #en #dataset-huggingartists/5nizza #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
<div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;URL </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800"> HuggingArtists Model </div> <div style="text-align: center; font-size: 16px; font-weight: 800">5’Nizza</div> <a href="URL <div style="text-align: center; font-size: 14px;">@5nizza</div> </a> </div> I was made with huggingartists. Create your own bot based on your favorite artist with the demo! ## How does it work? To understand how the model was developed, check the W&B report. ## Training data The model was trained on lyrics from 5’Nizza. Dataset is available here. And can be used with: Explore the data, which is tracked with W&B artifacts at every step of the pipeline. ## Training procedure The model is based on a pre-trained GPT-2 which is fine-tuned on 5’Nizza's lyrics. Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility. At the end of training, the final model is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: Or with Transformers library: ## Limitations and bias The model suffers from the same limitations and bias as GPT-2. In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Aleksey Korshuk* ![Follow](URL ![Follow](URL ![Follow](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. ![GitHub stars](URL
[ "## How does it work?\n\nTo understand how the model was developed, check the W&B report.", "## Training data\n\nThe model was trained on lyrics from 5’Nizza.\n\nDataset is available here.\nAnd can be used with:\n\n\n\nExplore the data, which is tracked with W&B artifacts at every step of the pipeline.", "## Training procedure\n\nThe model is based on a pre-trained GPT-2 which is fine-tuned on 5’Nizza's lyrics.\n\nHyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.\n\nAt the end of training, the final model is logged and versioned.", "## How to use\n\nYou can use this model directly with a pipeline for text generation:\n\n\n\nOr with Transformers library:", "## Limitations and bias\n\nThe model suffers from the same limitations and bias as GPT-2.\n\nIn addition, the data present in the user's tweets further affects the text generated by the model.", "## About\n\n*Built by Aleksey Korshuk*\n\n![Follow](URL\n\n![Follow](URL\n\n![Follow](https://t.me/joinchat/_CQ04KjcJ-4yZTky)\n\nFor more details, visit the project repository.\n\n![GitHub stars](URL" ]
[ "TAGS\n#transformers #pytorch #jax #gpt2 #text-generation #huggingartists #lyrics #lm-head #causal-lm #en #dataset-huggingartists/5nizza #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "## How does it work?\n\nTo understand how the model was developed, check the W&B report.", "## Training data\n\nThe model was trained on lyrics from 5’Nizza.\n\nDataset is available here.\nAnd can be used with:\n\n\n\nExplore the data, which is tracked with W&B artifacts at every step of the pipeline.", "## Training procedure\n\nThe model is based on a pre-trained GPT-2 which is fine-tuned on 5’Nizza's lyrics.\n\nHyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.\n\nAt the end of training, the final model is logged and versioned.", "## How to use\n\nYou can use this model directly with a pipeline for text generation:\n\n\n\nOr with Transformers library:", "## Limitations and bias\n\nThe model suffers from the same limitations and bias as GPT-2.\n\nIn addition, the data present in the user's tweets further affects the text generated by the model.", "## About\n\n*Built by Aleksey Korshuk*\n\n![Follow](URL\n\n![Follow](URL\n\n![Follow](https://t.me/joinchat/_CQ04KjcJ-4yZTky)\n\nFor more details, visit the project repository.\n\n![GitHub stars](URL" ]
text-generation
transformers
<div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/c56dce03a151e17a9626e55e6c295bb1.1000x1000x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">5opka</div> <a href="https://genius.com/artists/5opka"> <div style="text-align: center; font-size: 14px;">@5opka</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from 5opka. Dataset is available [here](https://huggingface.co/datasets/huggingartists/5opka). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/5opka") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/1o2s4fw8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on 5opka's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/3vitposx) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/3vitposx/artifacts) is logged and versioned. 
## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/5opka') generator("I am", num_return_sequences=5) ``` Or with the Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/5opka") model = AutoModelWithLMHead.from_pretrained("huggingartists/5opka") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the artist's lyrics further affects the text generated by the model. ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
{"language": "en", "tags": ["huggingartists", "lyrics", "lm-head", "causal-lm"], "datasets": ["huggingartists/5opka"], "widget": [{"text": "I am"}]}
huggingartists/5opka
null
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/5opka", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #jax #gpt2 #text-generation #huggingartists #lyrics #lm-head #causal-lm #en #dataset-huggingartists/5opka #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
<div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;URL </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800"> HuggingArtists Model </div> <div style="text-align: center; font-size: 16px; font-weight: 800">5opka</div> <a href="URL <div style="text-align: center; font-size: 14px;">@5opka</div> </a> </div> I was made with huggingartists. Create your own bot based on your favorite artist with the demo! ## How does it work? To understand how the model was developed, check the W&B report. ## Training data The model was trained on lyrics from 5opka. Dataset is available here. And can be used with: Explore the data, which is tracked with W&B artifacts at every step of the pipeline. ## Training procedure The model is based on a pre-trained GPT-2 which is fine-tuned on 5opka's lyrics. Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility. At the end of training, the final model is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: Or with Transformers library: ## Limitations and bias The model suffers from the same limitations and bias as GPT-2. In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Aleksey Korshuk* ![Follow](URL ![Follow](URL ![Follow](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. ![GitHub stars](URL
[ "## How does it work?\n\nTo understand how the model was developed, check the W&B report.", "## Training data\n\nThe model was trained on lyrics from 5opka.\n\nDataset is available here.\nAnd can be used with:\n\n\n\nExplore the data, which is tracked with W&B artifacts at every step of the pipeline.", "## Training procedure\n\nThe model is based on a pre-trained GPT-2 which is fine-tuned on 5opka's lyrics.\n\nHyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.\n\nAt the end of training, the final model is logged and versioned.", "## How to use\n\nYou can use this model directly with a pipeline for text generation:\n\n\n\nOr with Transformers library:", "## Limitations and bias\n\nThe model suffers from the same limitations and bias as GPT-2.\n\nIn addition, the data present in the user's tweets further affects the text generated by the model.", "## About\n\n*Built by Aleksey Korshuk*\n\n![Follow](URL\n\n![Follow](URL\n\n![Follow](https://t.me/joinchat/_CQ04KjcJ-4yZTky)\n\nFor more details, visit the project repository.\n\n![GitHub stars](URL" ]
[ "TAGS\n#transformers #pytorch #jax #gpt2 #text-generation #huggingartists #lyrics #lm-head #causal-lm #en #dataset-huggingartists/5opka #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "## How does it work?\n\nTo understand how the model was developed, check the W&B report.", "## Training data\n\nThe model was trained on lyrics from 5opka.\n\nDataset is available here.\nAnd can be used with:\n\n\n\nExplore the data, which is tracked with W&B artifacts at every step of the pipeline.", "## Training procedure\n\nThe model is based on a pre-trained GPT-2 which is fine-tuned on 5opka's lyrics.\n\nHyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.\n\nAt the end of training, the final model is logged and versioned.", "## How to use\n\nYou can use this model directly with a pipeline for text generation:\n\n\n\nOr with Transformers library:", "## Limitations and bias\n\nThe model suffers from the same limitations and bias as GPT-2.\n\nIn addition, the data present in the user's tweets further affects the text generated by the model.", "## About\n\n*Built by Aleksey Korshuk*\n\n![Follow](URL\n\n![Follow](URL\n\n![Follow](https://t.me/joinchat/_CQ04KjcJ-4yZTky)\n\nFor more details, visit the project repository.\n\n![GitHub stars](URL" ]
text-generation
transformers
<div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/b2b164a7c6c02dd0843ad597df5dbf4b.1000x1000x1.png&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">6ix9ine</div> <a href="https://genius.com/artists/6ix9ine"> <div style="text-align: center; font-size: 14px;">@6ix9ine</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from 6ix9ine. Dataset is available [here](https://huggingface.co/datasets/huggingartists/6ix9ine). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/6ix9ine") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/eqmcaj0r/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on 6ix9ine's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/s5dpg3h2) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/s5dpg3h2/artifacts) is logged and versioned. 
## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/6ix9ine') generator("I am", num_return_sequences=5) ``` Or with the Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/6ix9ine") model = AutoModelWithLMHead.from_pretrained("huggingartists/6ix9ine") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the artist's lyrics further affects the text generated by the model. ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
{"language": "en", "tags": ["huggingartists", "lyrics", "lm-head", "causal-lm"], "datasets": ["huggingartists/6ix9ine"], "widget": [{"text": "I am"}]}
huggingartists/6ix9ine
null
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/6ix9ine", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #jax #gpt2 #text-generation #huggingartists #lyrics #lm-head #causal-lm #en #dataset-huggingartists/6ix9ine #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
<div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;URL </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800"> HuggingArtists Model </div> <div style="text-align: center; font-size: 16px; font-weight: 800">6ix9ine</div> <a href="URL <div style="text-align: center; font-size: 14px;">@6ix9ine</div> </a> </div> I was made with huggingartists. Create your own bot based on your favorite artist with the demo! ## How does it work? To understand how the model was developed, check the W&B report. ## Training data The model was trained on lyrics from 6ix9ine. Dataset is available here. And can be used with: Explore the data, which is tracked with W&B artifacts at every step of the pipeline. ## Training procedure The model is based on a pre-trained GPT-2 which is fine-tuned on 6ix9ine's lyrics. Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility. At the end of training, the final model is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: Or with the Transformers library: ## Limitations and bias The model suffers from the same limitations and bias as GPT-2. In addition, the data present in the artist's lyrics further affects the text generated by the model. ## About *Built by Aleksey Korshuk* ![Follow](URL ![Follow](URL ![Follow](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. ![GitHub stars](URL
[ "## How does it work?\n\nTo understand how the model was developed, check the W&B report.", "## Training data\n\nThe model was trained on lyrics from 6ix9ine.\n\nDataset is available here.\nAnd can be used with:\n\n\n\nExplore the data, which is tracked with W&B artifacts at every step of the pipeline.", "## Training procedure\n\nThe model is based on a pre-trained GPT-2 which is fine-tuned on 6ix9ine's lyrics.\n\nHyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.\n\nAt the end of training, the final model is logged and versioned.", "## How to use\n\nYou can use this model directly with a pipeline for text generation:\n\n\n\nOr with the Transformers library:", "## Limitations and bias\n\nThe model suffers from the same limitations and bias as GPT-2.\n\nIn addition, the data present in the artist's lyrics further affects the text generated by the model.", "## About\n\n*Built by Aleksey Korshuk*\n\n![Follow](URL\n\n![Follow](URL\n\n![Follow](https://t.me/joinchat/_CQ04KjcJ-4yZTky)\n\nFor more details, visit the project repository.\n\n![GitHub stars](URL" ]
[ "TAGS\n#transformers #pytorch #jax #gpt2 #text-generation #huggingartists #lyrics #lm-head #causal-lm #en #dataset-huggingartists/6ix9ine #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "## How does it work?\n\nTo understand how the model was developed, check the W&B report.", "## Training data\n\nThe model was trained on lyrics from 6ix9ine.\n\nDataset is available here.\nAnd can be used with:\n\n\n\nExplore the data, which is tracked with W&B artifacts at every step of the pipeline.", "## Training procedure\n\nThe model is based on a pre-trained GPT-2 which is fine-tuned on 6ix9ine's lyrics.\n\nHyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.\n\nAt the end of training, the final model is logged and versioned.", "## How to use\n\nYou can use this model directly with a pipeline for text generation:\n\n\n\nOr with the Transformers library:", "## Limitations and bias\n\nThe model suffers from the same limitations and bias as GPT-2.\n\nIn addition, the data present in the artist's lyrics further affects the text generated by the model.", "## About\n\n*Built by Aleksey Korshuk*\n\n![Follow](URL\n\n![Follow](URL\n\n![Follow](https://t.me/joinchat/_CQ04KjcJ-4yZTky)\n\nFor more details, visit the project repository.\n\n![GitHub stars](URL" ]
text-generation
transformers
<div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/894021d09a748eef8c6d63ad898b814b.650x430x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Aaron Watson</div> <a href="https://genius.com/artists/aaron-watson"> <div style="text-align: center; font-size: 14px;">@aaron-watson</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from Aaron Watson. Dataset is available [here](https://huggingface.co/datasets/huggingartists/aaron-watson). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/aaron-watson") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/14ha1tnc/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Aaron Watson's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/34e4zb2v) for full transparency and reproducibility. 
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/34e4zb2v/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/aaron-watson') generator("I am", num_return_sequences=5) ``` Or with the Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/aaron-watson") model = AutoModelWithLMHead.from_pretrained("huggingartists/aaron-watson") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the artist's lyrics further affects the text generated by the model. ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
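The text-generation pipeline shown above samples each next token from the model's output distribution, often with nucleus (top-p) filtering. The following is a simplified, stdlib-only sketch of that idea over a toy distribution; it is an illustration of the technique, not the Transformers implementation:

```python
import random

def top_p_filter(probs, top_p):
    """Keep the smallest set of tokens whose cumulative probability
    reaches top_p, then renormalize. probs maps token -> probability."""
    total = 0.0
    kept = {}
    for tok, p in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
        kept[tok] = p
        total += p
        if total >= top_p:
            break
    return {tok: p / total for tok, p in kept.items()}

def sample(probs, rng):
    """Draw one token from a token -> probability mapping."""
    r = rng.random()
    acc = 0.0
    for tok, p in probs.items():
        acc += p
        if r <= acc:
            return tok
    return tok  # fallback for floating-point drift

# Toy next-token distribution; with top_p=0.9 the rare token is pruned.
dist = {"the": 0.5, "a": 0.3, "my": 0.15, "zebra": 0.05}
filtered = top_p_filter(dist, top_p=0.9)
next_token = sample(filtered, random.Random(0))
```

In the real pipeline this filtering happens over the full GPT-2 vocabulary at every generation step, and `top_p` can be passed to the generator as a generation keyword argument.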
{"language": "en", "tags": ["huggingartists", "lyrics", "lm-head", "causal-lm"], "datasets": ["huggingartists/aaron-watson"], "widget": [{"text": "I am"}]}
huggingartists/aaron-watson
null
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/aaron-watson", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #jax #gpt2 #text-generation #huggingartists #lyrics #lm-head #causal-lm #en #dataset-huggingartists/aaron-watson #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
<div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;URL </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800"> HuggingArtists Model </div> <div style="text-align: center; font-size: 16px; font-weight: 800">Aaron Watson</div> <a href="URL <div style="text-align: center; font-size: 14px;">@aaron-watson</div> </a> </div> I was made with huggingartists. Create your own bot based on your favorite artist with the demo! ## How does it work? To understand how the model was developed, check the W&B report. ## Training data The model was trained on lyrics from Aaron Watson. Dataset is available here. And can be used with: Explore the data, which is tracked with W&B artifacts at every step of the pipeline. ## Training procedure The model is based on a pre-trained GPT-2 which is fine-tuned on Aaron Watson's lyrics. Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility. At the end of training, the final model is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: Or with the Transformers library: ## Limitations and bias The model suffers from the same limitations and bias as GPT-2. In addition, the data present in the artist's lyrics further affects the text generated by the model. ## About *Built by Aleksey Korshuk* ![Follow](URL ![Follow](URL ![Follow](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. ![GitHub stars](URL
[ "## How does it work?\n\nTo understand how the model was developed, check the W&B report.", "## Training data\n\nThe model was trained on lyrics from Aaron Watson.\n\nDataset is available here.\nAnd can be used with:\n\n\n\nExplore the data, which is tracked with W&B artifacts at every step of the pipeline.", "## Training procedure\n\nThe model is based on a pre-trained GPT-2 which is fine-tuned on Aaron Watson's lyrics.\n\nHyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.\n\nAt the end of training, the final model is logged and versioned.", "## How to use\n\nYou can use this model directly with a pipeline for text generation:\n\n\n\nOr with the Transformers library:", "## Limitations and bias\n\nThe model suffers from the same limitations and bias as GPT-2.\n\nIn addition, the data present in the artist's lyrics further affects the text generated by the model.", "## About\n\n*Built by Aleksey Korshuk*\n\n![Follow](URL\n\n![Follow](URL\n\n![Follow](https://t.me/joinchat/_CQ04KjcJ-4yZTky)\n\nFor more details, visit the project repository.\n\n![GitHub stars](URL" ]
[ "TAGS\n#transformers #pytorch #jax #gpt2 #text-generation #huggingartists #lyrics #lm-head #causal-lm #en #dataset-huggingartists/aaron-watson #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "## How does it work?\n\nTo understand how the model was developed, check the W&B report.", "## Training data\n\nThe model was trained on lyrics from Aaron Watson.\n\nDataset is available here.\nAnd can be used with:\n\n\n\nExplore the data, which is tracked with W&B artifacts at every step of the pipeline.", "## Training procedure\n\nThe model is based on a pre-trained GPT-2 which is fine-tuned on Aaron Watson's lyrics.\n\nHyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.\n\nAt the end of training, the final model is logged and versioned.", "## How to use\n\nYou can use this model directly with a pipeline for text generation:\n\n\n\nOr with the Transformers library:", "## Limitations and bias\n\nThe model suffers from the same limitations and bias as GPT-2.\n\nIn addition, the data present in the artist's lyrics further affects the text generated by the model.", "## About\n\n*Built by Aleksey Korshuk*\n\n![Follow](URL\n\n![Follow](URL\n\n![Follow](https://t.me/joinchat/_CQ04KjcJ-4yZTky)\n\nFor more details, visit the project repository.\n\n![GitHub stars](URL" ]