| Column | Type | Values / length range |
|:-----|:-----|:-----|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 198 values |
| text | stringlengths | 1-900k |
| metadata | stringlengths | 2-438k |
| id | stringlengths | 5-122 |
| last_modified | null | |
| tags | listlengths | 1-1.84k |
| sha | null | |
| created_at | stringlengths | 25-25 |
| arxiv | listlengths | 0-201 |
| languages | listlengths | 0-1.83k |
| tags_str | stringlengths | 17-9.34k |
| text_str | stringlengths | 0-389k |
| text_lists | listlengths | 0-722 |
| processed_texts | listlengths | 1-723 |
text2text-generation
transformers
# legal_t5_small_trans_fr_it model

Model for translating legal text from French to Italian. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was trained on three parallel corpora: JRC-Acquis, Europarl and DCEP.

## Model description

legal_t5_small_trans_fr_it is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline t5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for translation of legal texts from French to Italian.

### How to use

Here is how to use this model to translate legal text from French to Italian in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_fr_it"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_fr_it", do_lower_case=False, skip_special_tokens=True),
    device=0,
)

fr_text = "considérant la multiplication des constructions qui ne respectent pas la culture des lieux et leur paysage particulier, dégradations à l'appui,"

pipeline([fr_text], max_length=512)
```

## Training data

The legal_t5_small_trans_fr_it model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used was AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:----------:|
| legal_t5_small_trans_fr_it | 46.45 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
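The snippet in the card above uses `AutoModelWithLMHead`, which newer releases of `transformers` have deprecated. Below is a minimal sketch of the same call with the current `AutoModelForSeq2SeqLM` class; only the checkpoint name comes from the card, the rest is the standard `transformers` API and the device choice is illustrative.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, TranslationPipeline

# Same checkpoint as in the card; AutoModelForSeq2SeqLM is the current replacement
# for the deprecated AutoModelWithLMHead for encoder-decoder models such as T5.
model_name = "SEBIS/legal_t5_small_trans_fr_it"
tokenizer = AutoTokenizer.from_pretrained(model_name, do_lower_case=False)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# device=-1 keeps everything on CPU; pass device=0 to use the first GPU as the card does.
translator = TranslationPipeline(model=model, tokenizer=tokenizer, device=-1)

fr_text = (
    "considérant la multiplication des constructions qui ne respectent pas "
    "la culture des lieux et leur paysage particulier, dégradations à l'appui,"
)
print(translator([fr_text], max_length=512))
```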
{"language": "French Italian", "tags": ["translation French Italian model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "consid\u00e9rant la multiplication des constructions qui ne respectent pas la culture des lieux et leur paysage particulier, d\u00e9gradations \u00e0 l'appui,"}]}
SEBIS/legal_t5_small_trans_fr_it
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation French Italian model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "French Italian" ]
text2text-generation
transformers
# legal_t5_small_trans_fr_it_small_finetuned model

Model for translating legal text from French to Italian. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all the translation data with an unsupervised task. Then the model was trained on three parallel corpora: JRC-Acquis, Europarl and DCEP.

## Model description

legal_t5_small_trans_fr_it_small_finetuned was initially pretrained on an unsupervised task using all of the data in the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_fr_it_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline t5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for translation of legal texts from French to Italian.

### How to use

Here is how to use this model to translate legal text from French to Italian in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_fr_it_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_fr_it", do_lower_case=False, skip_special_tokens=True),
    device=0,
)

fr_text = "Le vote a lieu dans un délai de deux mois après réception de la proposition, à moins qu'à la demande de la commission compétente, d'un groupe politique ou de quarante députés au moins, le Parlement n'en décide autrement."

pipeline([fr_text], max_length=512)
```

## Training data

The legal_t5_small_trans_fr_it_small_finetuned model (the supervised task involved only the corresponding language pair, while the unsupervised task used the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used was AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

The pre-training data was the combined data from all 42 language pairs. The task for the model was to predict the randomly masked portions of a sentence.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:----------:|
| legal_t5_small_trans_fr_it_small_finetuned | 46.309 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "French Italian", "tags": ["translation French Italian model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Le vote a lieu dans un d\u00e9lai de deux mois apr\u00e8s r\u00e9ception de la proposition, \u00e0 moins qu'\u00e0 la demande de la commission comp\u00e9tente, d'un groupe politique ou de quarante d\u00e9put\u00e9s au moins, le Parlement n'en d\u00e9cide autrement."}]}
SEBIS/legal_t5_small_trans_fr_it_small_finetuned
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation French Italian model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "French Italian" ]
text2text-generation
transformers
# legal_t5_small_trans_fr_sv model

Model for translating legal text from French to Swedish. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was trained on three parallel corpora: JRC-Acquis, Europarl and DCEP.

## Model description

legal_t5_small_trans_fr_sv is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline t5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for translation of legal texts from French to Swedish.

### How to use

Here is how to use this model to translate legal text from French to Swedish in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_fr_sv"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_fr_sv", do_lower_case=False, skip_special_tokens=True),
    device=0,
)

fr_text = "posée conformément à l'article 43 du règlement"

pipeline([fr_text], max_length=512)
```

## Training data

The legal_t5_small_trans_fr_sv model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used was AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:----------:|
| legal_t5_small_trans_fr_sv | 41.9 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
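The preprocessing section above only states that a unigram vocabulary model was trained on 88M lines from the combined parallel corpus; the exact command is not published. The sketch below shows how such a vocabulary is typically built with the `sentencepiece` library; the input file name, vocabulary size, and character coverage are assumptions for illustration, not values from the card.

```python
import sentencepiece as spm

# Hypothetical corpus file: one sentence per line, drawn from all language pairs.
# vocab_size and character_coverage are illustrative guesses, not published settings.
spm.SentencePieceTrainer.train(
    input="parallel_corpus_all_pairs.txt",
    model_prefix="legal_t5_unigram",
    model_type="unigram",
    vocab_size=32000,
    character_coverage=1.0,
)

# Load the trained model and segment a sample sentence from the card's widget.
sp = spm.SentencePieceProcessor(model_file="legal_t5_unigram.model")
print(sp.encode("posée conformément à l'article 43 du règlement", out_type=str))
```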
{"language": "French Swedish", "tags": ["translation French Swedish model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "pos\u00e9e conform\u00e9ment \u00e0 l'article 43 du r\u00e8glement"}]}
SEBIS/legal_t5_small_trans_fr_sv
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation French Swedish model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "French Swedish" ]
text2text-generation
transformers
# legal_t5_small_trans_fr_sv_small_finetuned model

Model for translating legal text from French to Swedish. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all the translation data with an unsupervised task. Then the model was trained on three parallel corpora: JRC-Acquis, Europarl and DCEP.

## Model description

legal_t5_small_trans_fr_sv_small_finetuned was initially pretrained on an unsupervised task using all of the data in the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_fr_sv_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline t5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for translation of legal texts from French to Swedish.

### How to use

Here is how to use this model to translate legal text from French to Swedish in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_fr_sv_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_fr_sv", do_lower_case=False, skip_special_tokens=True),
    device=0,
)

fr_text = "Budget 2009: Section III - Commission"

pipeline([fr_text], max_length=512)
```

## Training data

The legal_t5_small_trans_fr_sv_small_finetuned model (the supervised task involved only the corresponding language pair, while the unsupervised task used the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used was AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

The pre-training data was the combined data from all 42 language pairs. The task for the model was to predict the randomly masked portions of a sentence.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:----------:|
| legal_t5_small_trans_fr_sv_small_finetuned | 41.768 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
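The cards report BLEU on a held-out test set but do not include the scoring script. A sketch of how such a score can be reproduced with the `sacrebleu` package follows; the test file names, and the assumption that sacreBLEU's default settings match the reported numbers, are mine rather than the card's.

```python
import sacrebleu
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

# Hypothetical test files: one French source / Swedish reference sentence per line.
with open("test.fr", encoding="utf-8") as f:
    sources = [line.strip() for line in f]
with open("test.sv", encoding="utf-8") as f:
    references = [line.strip() for line in f]

translator = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_fr_sv_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/legal_t5_small_trans_fr_sv", do_lower_case=False),
    device=0,
)

hypotheses = [out["translation_text"] for out in translator(sources, max_length=512)]

# corpus_bleu takes the hypothesis strings and a list of reference streams.
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(round(bleu.score, 3))
```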
{"language": "French Swedish", "tags": ["translation French Swedish model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Budget 2009: Section III - Commission"}]}
SEBIS/legal_t5_small_trans_fr_sv_small_finetuned
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation French Swedish model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "French Swedish" ]
text2text-generation
transformers
# legal_t5_small_trans_it_cs model

Model for translating legal text from Italian to Czech. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was trained on three parallel corpora: JRC-Acquis, Europarl and DCEP.

## Model description

legal_t5_small_trans_it_cs is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline t5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for translation of legal texts from Italian to Czech.

### How to use

Here is how to use this model to translate legal text from Italian to Czech in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_it_cs"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_it_cs", do_lower_case=False, skip_special_tokens=True),
    device=0,
)

it_text = "sull'aumento dei prezzi dei prodotti alimentari"

pipeline([it_text], max_length=512)
```

## Training data

The legal_t5_small_trans_it_cs model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used was AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:----------:|
| legal_t5_small_trans_it_cs | 43.302 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Italian Cszech", "tags": ["translation Italian Cszech model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "sull'aumento dei prezzi dei prodotti alimentari"}]}
SEBIS/legal_t5_small_trans_it_cs
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Italian Cszech model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "Italian Cszech" ]
text2text-generation
transformers
# legal_t5_small_trans_it_cs_small_finetuned model

Model for translating legal text from Italian to Czech. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all the translation data with an unsupervised task. Then the model was trained on three parallel corpora: JRC-Acquis, Europarl and DCEP.

## Model description

legal_t5_small_trans_it_cs_small_finetuned was initially pretrained on an unsupervised task using all of the data in the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_it_cs_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline t5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for translation of legal texts from Italian to Czech.

### How to use

Here is how to use this model to translate legal text from Italian to Czech in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_it_cs_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_it_cs", do_lower_case=False, skip_special_tokens=True),
    device=0,
)

it_text = "Il consiglio di amministrazione è assistito da un comitato esecutivo."

pipeline([it_text], max_length=512)
```

## Training data

The legal_t5_small_trans_it_cs_small_finetuned model (the supervised task involved only the corresponding language pair, while the unsupervised task used the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used was AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

The pre-training data was the combined data from all 42 language pairs. The task for the model was to predict the randomly masked portions of a sentence.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:----------:|
| legal_t5_small_trans_it_cs_small_finetuned | 43.236 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Italian Cszech", "tags": ["translation Italian Cszech model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Il consiglio di amministrazione \u00e8 assistito da un comitato esecutivo."}]}
SEBIS/legal_t5_small_trans_it_cs_small_finetuned
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Italian Cszech model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "Italian Cszech" ]
text2text-generation
transformers
# legal_t5_small_trans_it_de model

Model for translating legal text from Italian to German. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was trained on three parallel corpora: JRC-Acquis, Europarl and DCEP.

## Model description

legal_t5_small_trans_it_de is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline t5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model can be used for translation of legal texts from Italian to German.

### How to use

Here is how to use this model to translate legal text from Italian to German in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_it_de"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_it_de", do_lower_case=False, skip_special_tokens=True),
    device=0,
)

it_text = "presentata con richiesta di iscrizione all'ordine del giorno della discussione su problemi di attualità, urgenti e di notevole rilevanza"

pipeline([it_text], max_length=512)
```

## Training data

The legal_t5_small_trans_it_de model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used was AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:----------:|
| legal_t5_small_trans_it_de | 40.615 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
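For callers who prefer explicit control over generation instead of `TranslationPipeline`, here is a minimal sketch with `generate()`; only the checkpoint name and `max_length` mirror the card, and the beam size is an illustrative choice.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "SEBIS/legal_t5_small_trans_it_de"
tokenizer = AutoTokenizer.from_pretrained(model_name, do_lower_case=False)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

it_text = (
    "presentata con richiesta di iscrizione all'ordine del giorno della discussione "
    "su problemi di attualità, urgenti e di notevole rilevanza"
)

# Tokenize, generate, and decode; num_beams=4 is an arbitrary but common setting.
inputs = tokenizer(it_text, return_tensors="pt", truncation=True, max_length=512)
output_ids = model.generate(**inputs, max_length=512, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```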
{"language": "Italian Deustch", "tags": ["translation Italian Deustch model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "presentata con richiesta di iscrizione all'ordine del giorno della discussione su problemi di attualit\u00e0, urgenti e di notevole rilevanza"}]}
SEBIS/legal_t5_small_trans_it_de
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Italian Deustch model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "Italian Deustch" ]
text2text-generation
transformers
# legal_t5_small_trans_it_de_small_finetuned model A model for translating legal text from Italian to German. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model is first pretrained on all of the translation data with an unsupervised task. Then the model is trained on three parallel corpora: jrc-acquis, europarl and dcep. ## Model description legal_t5_small_trans_it_de_small_finetuned is initially pretrained on an unsupervised task using all of the data in the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_it_de_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters. ## Intended uses & limitations The model can be used for translation of legal texts from Italian to German. ### How to use Here is how to use this model to translate legal text from Italian to German in PyTorch: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline pipeline = TranslationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_it_de_small_finetuned"), tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_it_de", do_lower_case=False, skip_special_tokens=True), device=0 ) it_text = "Interventi sulla votazione:" pipeline([it_text], max_length=512) ``` ## Training data The legal_t5_small_trans_it_de_small_finetuned model (trained on the supervised task, which involved only the corresponding language pair, as well as the unsupervised task, where the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 million parallel texts. ## Training procedure The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training. ### Preprocessing A unigram model was trained on 88M lines of text from the parallel corpus (covering all possible language pairs) to build the vocabulary (with byte-pair encoding) used with this model. ### Pretraining The pre-training data was the combined data from all 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly. ## Evaluation results When used on the translation test dataset, the model achieves the following results: Test results: | Model | BLEU score | |:-----:|:-----:| | legal_t5_small_trans_it_de_small_finetuned | 40.524| ### BibTeX entry and citation info > Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Italian Deustch", "tags": ["translation Italian Deustch model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Interventi sulla votazione:"}]}
SEBIS/legal_t5_small_trans_it_de_small_finetuned
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Italian Deustch model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "Italian Deustch" ]
TAGS #transformers #pytorch #jax #t5 #text2text-generation #translation Italian Deustch model #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
legal\_t5\_small\_trans\_it\_de\_small\_finetuned model ======================================================= A model for translating legal text from Italian to German. It was first released in this repository. This model is first pretrained on all of the translation data with an unsupervised task. Then the model is trained on three parallel corpora: jrc-acquis, europarl and dcep. Model description ----------------- legal\_t5\_small\_trans\_it\_de\_small\_finetuned is initially pretrained on an unsupervised task using all of the data in the training set. The unsupervised task was "masked language modelling". legal\_t5\_small\_trans\_it\_de\_small\_finetuned is based on the 't5-small' model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using 'dmodel = 512', 'dff = 2,048', 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters. Intended uses & limitations --------------------------- The model can be used for translation of legal texts from Italian to German. ### How to use Here is how to use this model to translate legal text from Italian to German in PyTorch (see the usage sketch below): Training data ------------- The legal\_t5\_small\_trans\_it\_de\_small\_finetuned model (trained on the supervised task, which involved only the corresponding language pair, as well as the unsupervised task, where the data of all language pairs was available) was trained on the JRC-ACQUIS, EUROPARL, and DCEP datasets, consisting of 8 million parallel texts. Training procedure ------------------ The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training. ### Preprocessing A unigram model was trained on 88M lines of text from the parallel corpus (covering all possible language pairs) to build the vocabulary (with byte-pair encoding) used with this model. ### Pretraining The pre-training data was the combined data from all 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly. Evaluation results ------------------ When used on the translation test dataset, the model achieves the following results: Test results: ### BibTeX entry and citation info > Created by Ahmed Elnaggar/@Elnaggar\_AI | LinkedIn
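A matching usage sketch for the fine-tuned checkpoint, mirroring the snippet in the full card. Note that the card loads the tokenizer from the base it_de checkpoint rather than from the fine-tuned one; `device=0` assumes a GPU.

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_it_de_small_finetuned"),
    # Tokenizer comes from the base translation checkpoint, as in the card.
    tokenizer=AutoTokenizer.from_pretrained(
        "SEBIS/legal_t5_small_trans_it_de", do_lower_case=False, skip_special_tokens=True
    ),
    device=0,  # use -1 for CPU
)

it_text = "Interventi sulla votazione:"
print(pipeline([it_text], max_length=512))
```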
[ "### How to use\n\n\nHere is how to use this model to translate legal text from Italian to Deustch in PyTorch:\n\n\nTraining data\n-------------\n\n\nThe legal\\_t5\\_small\\_trans\\_it\\_de\\_small\\_finetuned (the supervised task which involved only the corresponding langauge pair and as well as unsupervised task where all of the data of all language pairs were available) model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 8 Million parallel texts.\n\n\nTraining procedure\n------------------\n\n\nThe model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Preprocessing\n\n\nAn unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.", "### Pretraining\n\n\nThe pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.\n\n\nEvaluation results\n------------------\n\n\nWhen the model is used for translation test dataset, achieves the following results:\n\n\nTest results :", "### BibTeX entry and citation info\n\n\n\n> \n> Created by Ahmed Elnaggar/@Elnaggar\\_AI | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #text2text-generation #translation Italian Deustch model #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to translate legal text from Italian to Deustch in PyTorch:\n\n\nTraining data\n-------------\n\n\nThe legal\\_t5\\_small\\_trans\\_it\\_de\\_small\\_finetuned (the supervised task which involved only the corresponding langauge pair and as well as unsupervised task where all of the data of all language pairs were available) model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 8 Million parallel texts.\n\n\nTraining procedure\n------------------\n\n\nThe model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Preprocessing\n\n\nAn unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.", "### Pretraining\n\n\nThe pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.\n\n\nEvaluation results\n------------------\n\n\nWhen the model is used for translation test dataset, achieves the following results:\n\n\nTest results :", "### BibTeX entry and citation info\n\n\n\n> \n> Created by Ahmed Elnaggar/@Elnaggar\\_AI | LinkedIn\n> \n> \n>" ]
text2text-generation
transformers
# legal_t5_small_trans_it_en model Model on translating legal text from Italian to English. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpus from jrc-acquis, europarl and dcep. ## Model description legal_t5_small_trans_it_en is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters. ## Intended uses & limitations The model could be used for translation of legal texts from Italian to English. ### How to use Here is how to use this model to translate legal text from Italian to English in PyTorch: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline pipeline = TranslationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_it_en"), tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_it_en", do_lower_case=False, skip_special_tokens=True), device=0 ) it_text = "Oggetto: Libertà di culto in Turchia" pipeline([it_text], max_length=512) ``` ## Training data The legal_t5_small_trans_it_en model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 5 Million parallel texts. ## Training procedure The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Preprocessing An unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model. ### Pretraining ## Evaluation results When the model is used for translation test dataset, achieves the following results: Test results : | Model | BLEU score | |:-----:|:-----:| | legal_t5_small_trans_it_en | 50.068| ### BibTeX entry and citation info > Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Italian English", "tags": ["translation Italian English model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Oggetto: Libert\u00e0 di culto in Turchia"}]}
SEBIS/legal_t5_small_trans_it_en
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Italian English model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "Italian English" ]
TAGS #transformers #pytorch #jax #t5 #text2text-generation #translation Italian English model #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
legal\_t5\_small\_trans\_it\_en model ===================================== A model for translating legal text from Italian to English. It was first released in this repository. This model is trained on three parallel corpora: jrc-acquis, europarl and dcep. Model description ----------------- legal\_t5\_small\_trans\_it\_en is based on the 't5-small' model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using 'dmodel = 512', 'dff = 2,048', 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters. Intended uses & limitations --------------------------- The model can be used for translation of legal texts from Italian to English. ### How to use Here is how to use this model to translate legal text from Italian to English in PyTorch (see the usage sketch below): Training data ------------- The legal\_t5\_small\_trans\_it\_en model was trained on the JRC-ACQUIS, EUROPARL, and DCEP datasets, consisting of 5 million parallel texts. Training procedure ------------------ The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training. ### Preprocessing A unigram model was trained on 88M lines of text from the parallel corpus (covering all possible language pairs) to build the vocabulary (with byte-pair encoding) used with this model. ### Pretraining Evaluation results ------------------ When used on the translation test dataset, the model achieves the following results: Test results: ### BibTeX entry and citation info > Created by Ahmed Elnaggar/@Elnaggar\_AI | LinkedIn
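A minimal usage sketch for the "How to use" step above, mirroring the snippet kept in the full card (model ID and example sentence are the ones from this card; `device=0` assumes a GPU).

```python
# Translate an Italian legal sentence into English with the it_en checkpoint.
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_it_en"),
    tokenizer=AutoTokenizer.from_pretrained(
        "SEBIS/legal_t5_small_trans_it_en", do_lower_case=False, skip_special_tokens=True
    ),
    device=0,  # use -1 for CPU
)

it_text = "Oggetto: Libertà di culto in Turchia"
print(pipeline([it_text], max_length=512))
```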
[ "### How to use\n\n\nHere is how to use this model to translate legal text from Italian to English in PyTorch:\n\n\nTraining data\n-------------\n\n\nThe legal\\_t5\\_small\\_trans\\_it\\_en model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 5 Million parallel texts.\n\n\nTraining procedure\n------------------\n\n\nThe model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Preprocessing\n\n\nAn unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.", "### Pretraining\n\n\nEvaluation results\n------------------\n\n\nWhen the model is used for translation test dataset, achieves the following results:\n\n\nTest results :", "### BibTeX entry and citation info\n\n\n\n> \n> Created by Ahmed Elnaggar/@Elnaggar\\_AI | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #text2text-generation #translation Italian English model #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to translate legal text from Italian to English in PyTorch:\n\n\nTraining data\n-------------\n\n\nThe legal\\_t5\\_small\\_trans\\_it\\_en model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 5 Million parallel texts.\n\n\nTraining procedure\n------------------\n\n\nThe model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Preprocessing\n\n\nAn unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.", "### Pretraining\n\n\nEvaluation results\n------------------\n\n\nWhen the model is used for translation test dataset, achieves the following results:\n\n\nTest results :", "### BibTeX entry and citation info\n\n\n\n> \n> Created by Ahmed Elnaggar/@Elnaggar\\_AI | LinkedIn\n> \n> \n>" ]
text2text-generation
transformers
# legal_t5_small_trans_it_en_small_finetuned model Model on translating legal text from Italian to English. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model is first pretrained all the translation data over some unsupervised task. Then the model is trained on three parallel corpus from jrc-acquis, europarl and dcep. ## Model description legal_t5_small_trans_it_en_small_finetuned is initially pretrained on unsupervised task with the all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_it_en_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters. ## Intended uses & limitations The model could be used for translation of legal texts from Italian to English. ### How to use Here is how to use this model to translate legal text from Italian to English in PyTorch: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline pipeline = TranslationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_it_en_small_finetuned"), tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_it_en", do_lower_case=False, skip_special_tokens=True), device=0 ) it_text = "Supplenti presenti al momento della votazione finale" pipeline([it_text], max_length=512) ``` ## Training data The legal_t5_small_trans_it_en_small_finetuned (the supervised task which involved only the corresponding langauge pair and as well as unsupervised task where all of the data of all language pairs were available) model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 9 Million parallel texts. ## Training procedure The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Preprocessing An unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model. ### Pretraining The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly. ## Evaluation results When the model is used for translation test dataset, achieves the following results: Test results : | Model | BLEU score | |:-----:|:-----:| | legal_t5_small_trans_it_en_small_finetuned | 49.840| ### BibTeX entry and citation info > Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Italian English", "tags": ["translation Italian English model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Supplenti presenti al momento della votazione finale"}]}
SEBIS/legal_t5_small_trans_it_en_small_finetuned
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Italian English model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "Italian English" ]
TAGS #transformers #pytorch #jax #t5 #text2text-generation #translation Italian English model #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
legal\_t5\_small\_trans\_it\_en\_small\_finetuned model ======================================================= A model for translating legal text from Italian to English. It was first released in this repository. This model is first pretrained on all of the translation data with an unsupervised task. Then the model is trained on three parallel corpora: jrc-acquis, europarl and dcep. Model description ----------------- legal\_t5\_small\_trans\_it\_en\_small\_finetuned is initially pretrained on an unsupervised task using all of the data in the training set. The unsupervised task was "masked language modelling". legal\_t5\_small\_trans\_it\_en\_small\_finetuned is based on the 't5-small' model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using 'dmodel = 512', 'dff = 2,048', 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters. Intended uses & limitations --------------------------- The model can be used for translation of legal texts from Italian to English. ### How to use Here is how to use this model to translate legal text from Italian to English in PyTorch (see the usage sketch below): Training data ------------- The legal\_t5\_small\_trans\_it\_en\_small\_finetuned model (trained on the supervised task, which involved only the corresponding language pair, as well as the unsupervised task, where the data of all language pairs was available) was trained on the JRC-ACQUIS, EUROPARL, and DCEP datasets, consisting of 9 million parallel texts. Training procedure ------------------ The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training. ### Preprocessing A unigram model was trained on 88M lines of text from the parallel corpus (covering all possible language pairs) to build the vocabulary (with byte-pair encoding) used with this model. ### Pretraining The pre-training data was the combined data from all 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly. Evaluation results ------------------ When used on the translation test dataset, the model achieves the following results: Test results: ### BibTeX entry and citation info > Created by Ahmed Elnaggar/@Elnaggar\_AI | LinkedIn
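A usage sketch for the fine-tuned Italian-to-English checkpoint, mirroring the snippet in the full card (the tokenizer is loaded from the base it_en checkpoint, as in the card; `device=0` assumes a GPU).

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_it_en_small_finetuned"),
    # Tokenizer from the base translation checkpoint, as in the card.
    tokenizer=AutoTokenizer.from_pretrained(
        "SEBIS/legal_t5_small_trans_it_en", do_lower_case=False, skip_special_tokens=True
    ),
    device=0,  # use -1 for CPU
)

it_text = "Supplenti presenti al momento della votazione finale"
print(pipeline([it_text], max_length=512))
```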
[ "### How to use\n\n\nHere is how to use this model to translate legal text from Italian to English in PyTorch:\n\n\nTraining data\n-------------\n\n\nThe legal\\_t5\\_small\\_trans\\_it\\_en\\_small\\_finetuned (the supervised task which involved only the corresponding langauge pair and as well as unsupervised task where all of the data of all language pairs were available) model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 9 Million parallel texts.\n\n\nTraining procedure\n------------------\n\n\nThe model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Preprocessing\n\n\nAn unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.", "### Pretraining\n\n\nThe pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.\n\n\nEvaluation results\n------------------\n\n\nWhen the model is used for translation test dataset, achieves the following results:\n\n\nTest results :", "### BibTeX entry and citation info\n\n\n\n> \n> Created by Ahmed Elnaggar/@Elnaggar\\_AI | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #text2text-generation #translation Italian English model #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to translate legal text from Italian to English in PyTorch:\n\n\nTraining data\n-------------\n\n\nThe legal\\_t5\\_small\\_trans\\_it\\_en\\_small\\_finetuned (the supervised task which involved only the corresponding langauge pair and as well as unsupervised task where all of the data of all language pairs were available) model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 9 Million parallel texts.\n\n\nTraining procedure\n------------------\n\n\nThe model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Preprocessing\n\n\nAn unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.", "### Pretraining\n\n\nThe pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.\n\n\nEvaluation results\n------------------\n\n\nWhen the model is used for translation test dataset, achieves the following results:\n\n\nTest results :", "### BibTeX entry and citation info\n\n\n\n> \n> Created by Ahmed Elnaggar/@Elnaggar\\_AI | LinkedIn\n> \n> \n>" ]
text2text-generation
transformers
# legal_t5_small_trans_it_es model Model on translating legal text from Italian to Spanish. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpus from jrc-acquis, europarl and dcep. ## Model description legal_t5_small_trans_it_es is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters. ## Intended uses & limitations The model could be used for translation of legal texts from Italian to Spanish. ### How to use Here is how to use this model to translate legal text from Italian to Spanish in PyTorch: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline pipeline = TranslationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_it_es"), tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_it_es", do_lower_case=False, skip_special_tokens=True), device=0 ) it_text = "Risoluzione del Parlamento europeo sulle perquisizioni effettuate ad Ankara nella sede principale dell'Associazione per i diritti dell'uomo in Turchia" pipeline([it_text], max_length=512) ``` ## Training data The legal_t5_small_trans_it_es model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 5 Million parallel texts. ## Training procedure The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Preprocessing An unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model. ### Pretraining ## Evaluation results When the model is used for translation test dataset, achieves the following results: Test results : | Model | BLEU score | |:-----:|:-----:| | legal_t5_small_trans_it_es | 48.998| ### BibTeX entry and citation info > Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Italian Spanish", "tags": ["translation Italian Spanish model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Risoluzione del Parlamento europeo sulle perquisizioni effettuate ad Ankara nella sede principale dell'Associazione per i diritti dell'uomo in Turchia"}]}
SEBIS/legal_t5_small_trans_it_es
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Italian Spanish model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "Italian Spanish" ]
TAGS #transformers #pytorch #jax #t5 #text2text-generation #translation Italian Spanish model #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
legal\_t5\_small\_trans\_it\_es model ===================================== A model for translating legal text from Italian to Spanish. It was first released in this repository. This model is trained on three parallel corpora: jrc-acquis, europarl and dcep. Model description ----------------- legal\_t5\_small\_trans\_it\_es is based on the 't5-small' model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using 'dmodel = 512', 'dff = 2,048', 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters. Intended uses & limitations --------------------------- The model can be used for translation of legal texts from Italian to Spanish. ### How to use Here is how to use this model to translate legal text from Italian to Spanish in PyTorch (see the usage sketch below): Training data ------------- The legal\_t5\_small\_trans\_it\_es model was trained on the JRC-ACQUIS, EUROPARL, and DCEP datasets, consisting of 5 million parallel texts. Training procedure ------------------ The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training. ### Preprocessing A unigram model was trained on 88M lines of text from the parallel corpus (covering all possible language pairs) to build the vocabulary (with byte-pair encoding) used with this model. ### Pretraining Evaluation results ------------------ When used on the translation test dataset, the model achieves the following results: Test results: ### BibTeX entry and citation info > Created by Ahmed Elnaggar/@Elnaggar\_AI | LinkedIn
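A minimal usage sketch for the "How to use" step above, mirroring the snippet kept in the full card (model ID and example sentence are the ones from this card; `device=0` assumes a GPU).

```python
# Translate an Italian legal sentence into Spanish with the it_es checkpoint.
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_it_es"),
    tokenizer=AutoTokenizer.from_pretrained(
        "SEBIS/legal_t5_small_trans_it_es", do_lower_case=False, skip_special_tokens=True
    ),
    device=0,  # use -1 for CPU
)

it_text = "Risoluzione del Parlamento europeo sulle perquisizioni effettuate ad Ankara nella sede principale dell'Associazione per i diritti dell'uomo in Turchia"
print(pipeline([it_text], max_length=512))
```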
[ "### How to use\n\n\nHere is how to use this model to translate legal text from Italian to Spanish in PyTorch:\n\n\nTraining data\n-------------\n\n\nThe legal\\_t5\\_small\\_trans\\_it\\_es model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 5 Million parallel texts.\n\n\nTraining procedure\n------------------\n\n\nThe model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Preprocessing\n\n\nAn unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.", "### Pretraining\n\n\nEvaluation results\n------------------\n\n\nWhen the model is used for translation test dataset, achieves the following results:\n\n\nTest results :", "### BibTeX entry and citation info\n\n\n\n> \n> Created by Ahmed Elnaggar/@Elnaggar\\_AI | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #text2text-generation #translation Italian Spanish model #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to translate legal text from Italian to Spanish in PyTorch:\n\n\nTraining data\n-------------\n\n\nThe legal\\_t5\\_small\\_trans\\_it\\_es model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 5 Million parallel texts.\n\n\nTraining procedure\n------------------\n\n\nThe model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Preprocessing\n\n\nAn unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.", "### Pretraining\n\n\nEvaluation results\n------------------\n\n\nWhen the model is used for translation test dataset, achieves the following results:\n\n\nTest results :", "### BibTeX entry and citation info\n\n\n\n> \n> Created by Ahmed Elnaggar/@Elnaggar\\_AI | LinkedIn\n> \n> \n>" ]
text2text-generation
transformers
# legal_t5_small_trans_it_es_small_finetuned model Model on translating legal text from Italian to Spanish. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model is first pretrained all the translation data over some unsupervised task. Then the model is trained on three parallel corpus from jrc-acquis, europarl and dcep. ## Model description legal_t5_small_trans_it_es_small_finetuned is initially pretrained on unsupervised task with the all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_it_es_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters. ## Intended uses & limitations The model could be used for translation of legal texts from Italian to Spanish. ### How to use Here is how to use this model to translate legal text from Italian to Spanish in PyTorch: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline pipeline = TranslationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_it_es_small_finetuned"), tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_it_es", do_lower_case=False, skip_special_tokens=True), device=0 ) it_text = "considerando che il 28 marzo 2002 il Consiglio di sicurezza dell'ONU si è dichiarato favorevole all'attuazione integrale del Protocollo di Lusaka e si è detto disposto a cooperare con tutte le parti in conflitto per raggiungere tale obiettivo, nonché ad avviare consultazioni con il governo dell'Angola per ricercare i mezzi con cui modificare le sanzioni imposte all'UNITA attraverso la risoluzione 1127 (1997), e ciò al fine di agevolare i colloqui di pace," pipeline([it_text], max_length=512) ``` ## Training data The legal_t5_small_trans_it_es_small_finetuned (the supervised task which involved only the corresponding langauge pair and as well as unsupervised task where all of the data of all language pairs were available) model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 9 Million parallel texts. ## Training procedure The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Preprocessing An unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model. ### Pretraining The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly. 
## Evaluation results When the model is used for translation test dataset, achieves the following results: Test results : | Model | BLEU score | |:-----:|:-----:| | legal_t5_small_trans_it_es_small_finetuned | 49.083| ### BibTeX entry and citation info > Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Italian Spanish", "tags": ["translation Italian Spanish model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "considerando che il 28 marzo 2002 il Consiglio di sicurezza dell'ONU si \u00e8 dichiarato favorevole all'attuazione integrale del Protocollo di Lusaka e si \u00e8 detto disposto a cooperare con tutte le parti in conflitto per raggiungere tale obiettivo, nonch\u00e9 ad avviare consultazioni con il governo dell'Angola per ricercare i mezzi con cui modificare le sanzioni imposte all'UNITA attraverso la risoluzione 1127 (1997), e ci\u00f2 al fine di agevolare i colloqui di pace,"}]}
SEBIS/legal_t5_small_trans_it_es_small_finetuned
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Italian Spanish model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "Italian Spanish" ]
TAGS #transformers #pytorch #jax #t5 #text2text-generation #translation Italian Spanish model #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
legal\_t5\_small\_trans\_it\_es\_small\_finetuned model ======================================================= A model for translating legal text from Italian to Spanish. It was first released in this repository. This model is first pretrained on all of the translation data with an unsupervised task. Then the model is trained on three parallel corpora: jrc-acquis, europarl and dcep. Model description ----------------- legal\_t5\_small\_trans\_it\_es\_small\_finetuned is initially pretrained on an unsupervised task using all of the data in the training set. The unsupervised task was "masked language modelling". legal\_t5\_small\_trans\_it\_es\_small\_finetuned is based on the 't5-small' model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using 'dmodel = 512', 'dff = 2,048', 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters. Intended uses & limitations --------------------------- The model can be used for translation of legal texts from Italian to Spanish. ### How to use Here is how to use this model to translate legal text from Italian to Spanish in PyTorch (see the usage sketch below): Training data ------------- The legal\_t5\_small\_trans\_it\_es\_small\_finetuned model (trained on the supervised task, which involved only the corresponding language pair, as well as the unsupervised task, where the data of all language pairs was available) was trained on the JRC-ACQUIS, EUROPARL, and DCEP datasets, consisting of 9 million parallel texts. Training procedure ------------------ The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training. ### Preprocessing A unigram model was trained on 88M lines of text from the parallel corpus (covering all possible language pairs) to build the vocabulary (with byte-pair encoding) used with this model. ### Pretraining The pre-training data was the combined data from all 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly. Evaluation results ------------------ When used on the translation test dataset, the model achieves the following results: Test results: ### BibTeX entry and citation info > Created by Ahmed Elnaggar/@Elnaggar\_AI | LinkedIn
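A usage sketch for the fine-tuned Italian-to-Spanish checkpoint, mirroring the snippet in the full card (tokenizer loaded from the base it_es checkpoint, as in the card; `device=0` assumes a GPU).

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_it_es_small_finetuned"),
    # Tokenizer from the base translation checkpoint, as in the card.
    tokenizer=AutoTokenizer.from_pretrained(
        "SEBIS/legal_t5_small_trans_it_es", do_lower_case=False, skip_special_tokens=True
    ),
    device=0,  # use -1 for CPU
)

it_text = "considerando che il 28 marzo 2002 il Consiglio di sicurezza dell'ONU si è dichiarato favorevole all'attuazione integrale del Protocollo di Lusaka e si è detto disposto a cooperare con tutte le parti in conflitto per raggiungere tale obiettivo, nonché ad avviare consultazioni con il governo dell'Angola per ricercare i mezzi con cui modificare le sanzioni imposte all'UNITA attraverso la risoluzione 1127 (1997), e ciò al fine di agevolare i colloqui di pace,"
print(pipeline([it_text], max_length=512))
```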
[ "### How to use\n\n\nHere is how to use this model to translate legal text from Italian to Spanish in PyTorch:\n\n\nTraining data\n-------------\n\n\nThe legal\\_t5\\_small\\_trans\\_it\\_es\\_small\\_finetuned (the supervised task which involved only the corresponding langauge pair and as well as unsupervised task where all of the data of all language pairs were available) model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 9 Million parallel texts.\n\n\nTraining procedure\n------------------\n\n\nThe model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Preprocessing\n\n\nAn unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.", "### Pretraining\n\n\nThe pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.\n\n\nEvaluation results\n------------------\n\n\nWhen the model is used for translation test dataset, achieves the following results:\n\n\nTest results :", "### BibTeX entry and citation info\n\n\n\n> \n> Created by Ahmed Elnaggar/@Elnaggar\\_AI | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #text2text-generation #translation Italian Spanish model #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to translate legal text from Italian to Spanish in PyTorch:\n\n\nTraining data\n-------------\n\n\nThe legal\\_t5\\_small\\_trans\\_it\\_es\\_small\\_finetuned (the supervised task which involved only the corresponding langauge pair and as well as unsupervised task where all of the data of all language pairs were available) model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 9 Million parallel texts.\n\n\nTraining procedure\n------------------\n\n\nThe model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Preprocessing\n\n\nAn unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.", "### Pretraining\n\n\nThe pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.\n\n\nEvaluation results\n------------------\n\n\nWhen the model is used for translation test dataset, achieves the following results:\n\n\nTest results :", "### BibTeX entry and citation info\n\n\n\n> \n> Created by Ahmed Elnaggar/@Elnaggar\\_AI | LinkedIn\n> \n> \n>" ]
text2text-generation
transformers
# legal_t5_small_trans_it_fr model Model on translating legal text from Italian to French. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpus from jrc-acquis, europarl and dcep. ## Model description legal_t5_small_trans_it_fr is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters. ## Intended uses & limitations The model could be used for translation of legal texts from Italian to French. ### How to use Here is how to use this model to translate legal text from Italian to French in PyTorch: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline pipeline = TranslationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_it_fr"), tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_it_fr", do_lower_case=False, skip_special_tokens=True), device=0 ) it_text = "Qualora gli emendamenti approvati dal Parlamento abbiano l'effetto di aumentare le spese iscritte nel progetto di bilancio oltre il tasso massimo previsto, la commissione competente per il merito sottopone al Parlamento una proposta intesa a fissare un nuovo tasso massimo in conformità del paragrafo 9, ultimo comma, degli articoli 78 del trattato CECA, 272 del trattato CE e 177 del trattato CEEA." pipeline([it_text], max_length=512) ``` ## Training data The legal_t5_small_trans_it_fr model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 5 Million parallel texts. ## Training procedure The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Preprocessing An unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model. ### Pretraining ## Evaluation results When the model is used for translation test dataset, achieves the following results: Test results : | Model | BLEU score | |:-----:|:-----:| | legal_t5_small_trans_it_fr | 50.559| ### BibTeX entry and citation info > Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Italian French", "tags": ["translation Italian French model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Qualora gli emendamenti approvati dal Parlamento abbiano l'effetto di aumentare le spese iscritte nel progetto di bilancio oltre il tasso massimo previsto, la commissione competente per il merito sottopone al Parlamento una proposta intesa a fissare un nuovo tasso massimo in conformit\u00e0 del paragrafo 9, ultimo comma, degli articoli 78 del trattato CECA, 272 del trattato CE e 177 del trattato CEEA."}]}
SEBIS/legal_t5_small_trans_it_fr
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Italian French model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "Italian French" ]
TAGS #transformers #pytorch #jax #t5 #text2text-generation #translation Italian French model #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
legal\_t5\_small\_trans\_it\_fr model ===================================== A model for translating legal text from Italian to French. It was first released in this repository. This model is trained on three parallel corpora: jrc-acquis, europarl and dcep. Model description ----------------- legal\_t5\_small\_trans\_it\_fr is based on the 't5-small' model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using 'dmodel = 512', 'dff = 2,048', 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters. Intended uses & limitations --------------------------- The model can be used for translation of legal texts from Italian to French. ### How to use Here is how to use this model to translate legal text from Italian to French in PyTorch (see the usage sketch below): Training data ------------- The legal\_t5\_small\_trans\_it\_fr model was trained on the JRC-ACQUIS, EUROPARL, and DCEP datasets, consisting of 5 million parallel texts. Training procedure ------------------ The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training. ### Preprocessing A unigram model was trained on 88M lines of text from the parallel corpus (covering all possible language pairs) to build the vocabulary (with byte-pair encoding) used with this model. ### Pretraining Evaluation results ------------------ When used on the translation test dataset, the model achieves the following results: Test results: ### BibTeX entry and citation info > Created by Ahmed Elnaggar/@Elnaggar\_AI | LinkedIn
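A minimal usage sketch for the "How to use" step above, mirroring the snippet kept in the full card (model ID and example sentence are the ones from this card; `device=0` assumes a GPU).

```python
# Translate an Italian legal sentence into French with the it_fr checkpoint.
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_it_fr"),
    tokenizer=AutoTokenizer.from_pretrained(
        "SEBIS/legal_t5_small_trans_it_fr", do_lower_case=False, skip_special_tokens=True
    ),
    device=0,  # use -1 for CPU
)

it_text = "Qualora gli emendamenti approvati dal Parlamento abbiano l'effetto di aumentare le spese iscritte nel progetto di bilancio oltre il tasso massimo previsto, la commissione competente per il merito sottopone al Parlamento una proposta intesa a fissare un nuovo tasso massimo in conformità del paragrafo 9, ultimo comma, degli articoli 78 del trattato CECA, 272 del trattato CE e 177 del trattato CEEA."
print(pipeline([it_text], max_length=512))
```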
[ "### How to use\n\n\nHere is how to use this model to translate legal text from Italian to French in PyTorch:\n\n\nTraining data\n-------------\n\n\nThe legal\\_t5\\_small\\_trans\\_it\\_fr model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 5 Million parallel texts.\n\n\nTraining procedure\n------------------\n\n\nThe model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Preprocessing\n\n\nAn unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.", "### Pretraining\n\n\nEvaluation results\n------------------\n\n\nWhen the model is used for translation test dataset, achieves the following results:\n\n\nTest results :", "### BibTeX entry and citation info\n\n\n\n> \n> Created by Ahmed Elnaggar/@Elnaggar\\_AI | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #text2text-generation #translation Italian French model #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to translate legal text from Italian to French in PyTorch:\n\n\nTraining data\n-------------\n\n\nThe legal\\_t5\\_small\\_trans\\_it\\_fr model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 5 Million parallel texts.\n\n\nTraining procedure\n------------------\n\n\nThe model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Preprocessing\n\n\nAn unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.", "### Pretraining\n\n\nEvaluation results\n------------------\n\n\nWhen the model is used for translation test dataset, achieves the following results:\n\n\nTest results :", "### BibTeX entry and citation info\n\n\n\n> \n> Created by Ahmed Elnaggar/@Elnaggar\\_AI | LinkedIn\n> \n> \n>" ]
text2text-generation
transformers
# legal_t5_small_trans_it_fr_small_finetuned model Model on translating legal text from Italian to French. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model is first pretrained all the translation data over some unsupervised task. Then the model is trained on three parallel corpus from jrc-acquis, europarl and dcep. ## Model description legal_t5_small_trans_it_fr_small_finetuned is initially pretrained on unsupervised task with the all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_it_fr_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters. ## Intended uses & limitations The model could be used for translation of legal texts from Italian to French. ### How to use Here is how to use this model to translate legal text from Italian to French in PyTorch: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline pipeline = TranslationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_it_fr_small_finetuned"), tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_it_fr", do_lower_case=False, skip_special_tokens=True), device=0 ) it_text = "Dichiarazioni del Consiglio e della Commissione" pipeline([it_text], max_length=512) ``` ## Training data The legal_t5_small_trans_it_fr_small_finetuned (the supervised task which involved only the corresponding langauge pair and as well as unsupervised task where all of the data of all language pairs were available) model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 9 Million parallel texts. ## Training procedure The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Preprocessing An unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model. ### Pretraining The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly. ## Evaluation results When the model is used for translation test dataset, achieves the following results: Test results : | Model | BLEU score | |:-----:|:-----:| | legal_t5_small_trans_it_fr_small_finetuned | 50.557| ### BibTeX entry and citation info > Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Italian French", "tags": ["translation Italian French model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Dichiarazioni del Consiglio e della Commissione"}]}
SEBIS/legal_t5_small_trans_it_fr_small_finetuned
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Italian French model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "Italian French" ]
TAGS #transformers #pytorch #jax #t5 #text2text-generation #translation Italian French model #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
legal\_t5\_small\_trans\_it\_fr\_small\_finetuned model ======================================================= Model on translating legal text from Italian to French. It was first released in this repository. This model is first pretrained all the translation data over some unsupervised task. Then the model is trained on three parallel corpus from jrc-acquis, europarl and dcep. Model description ----------------- legal\_t5\_small\_trans\_it\_fr\_small\_finetuned is initially pretrained on unsupervised task with the all of the data of the training set. The unsupervised task was "masked language modelling". legal\_t5\_small\_trans\_it\_fr\_small\_finetuned is based on the 't5-small' model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using 'dmodel = 512', 'dff = 2,048', 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters. Intended uses & limitations --------------------------- The model could be used for translation of legal texts from Italian to French. ### How to use Here is how to use this model to translate legal text from Italian to French in PyTorch: Training data ------------- The legal\_t5\_small\_trans\_it\_fr\_small\_finetuned (the supervised task which involved only the corresponding langauge pair and as well as unsupervised task where all of the data of all language pairs were available) model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 9 Million parallel texts. Training procedure ------------------ The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Preprocessing An unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model. ### Pretraining The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly. Evaluation results ------------------ When the model is used for translation test dataset, achieves the following results: Test results : ### BibTeX entry and citation info > > Created by Ahmed Elnaggar/@Elnaggar\_AI | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to translate legal text from Italian to French in PyTorch:\n\n\nTraining data\n-------------\n\n\nThe legal\\_t5\\_small\\_trans\\_it\\_fr\\_small\\_finetuned (the supervised task which involved only the corresponding langauge pair and as well as unsupervised task where all of the data of all language pairs were available) model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 9 Million parallel texts.\n\n\nTraining procedure\n------------------\n\n\nThe model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Preprocessing\n\n\nAn unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.", "### Pretraining\n\n\nThe pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.\n\n\nEvaluation results\n------------------\n\n\nWhen the model is used for translation test dataset, achieves the following results:\n\n\nTest results :", "### BibTeX entry and citation info\n\n\n\n> \n> Created by Ahmed Elnaggar/@Elnaggar\\_AI | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #text2text-generation #translation Italian French model #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to translate legal text from Italian to French in PyTorch:\n\n\nTraining data\n-------------\n\n\nThe legal\\_t5\\_small\\_trans\\_it\\_fr\\_small\\_finetuned (the supervised task which involved only the corresponding langauge pair and as well as unsupervised task where all of the data of all language pairs were available) model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 9 Million parallel texts.\n\n\nTraining procedure\n------------------\n\n\nThe model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Preprocessing\n\n\nAn unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.", "### Pretraining\n\n\nThe pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.\n\n\nEvaluation results\n------------------\n\n\nWhen the model is used for translation test dataset, achieves the following results:\n\n\nTest results :", "### BibTeX entry and citation info\n\n\n\n> \n> Created by Ahmed Elnaggar/@Elnaggar\\_AI | LinkedIn\n> \n> \n>" ]
text2text-generation
transformers
# legal_t5_small_trans_it_sv model

A model for translating legal text from Italian to Swedish. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was trained on three parallel corpora: jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_it_sv is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model could be used for translation of legal texts from Italian to Swedish.

### How to use

Here is how to use this model to translate legal text from Italian to Swedish in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_it_sv"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_it_sv", do_lower_case=False, skip_special_tokens=True),
    device=0
)

it_text = "K. considerando che, come avviene con tutti i sistemi di sanità elettronica, la progettazione, lo sviluppo e l’attuazione di sistemi abilitati alla tecnologia RFID presuppongono il coinvolgimento diretto dei professionisti sanitari, dei pazienti e delle commissioni competenti (per esempio, sulla protezione dei dati e sull’etica),"

pipeline([it_text], max_length=512)
```

## Training data

The legal_t5_small_trans_it_sv model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used was AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_it_sv | 41.508|

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
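The usage snippet in this card (like the others in this series) relies on `AutoModelWithLMHead`, which is deprecated in recent versions of the transformers library. A roughly equivalent sketch with the current `AutoModelForSeq2SeqLM` class is shown below; the beam-search settings are illustrative assumptions rather than recommendations from the model authors.

```python
# Equivalent usage with the non-deprecated seq2seq classes (a sketch, not the authors' code).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("SEBIS/legal_t5_small_trans_it_sv")
model = AutoModelForSeq2SeqLM.from_pretrained("SEBIS/legal_t5_small_trans_it_sv")

it_text = ("K. considerando che, come avviene con tutti i sistemi di sanità elettronica, "
           "la progettazione, lo sviluppo e l’attuazione di sistemi abilitati alla tecnologia RFID "
           "presuppongono il coinvolgimento diretto dei professionisti sanitari, dei pazienti e "
           "delle commissioni competenti (per esempio, sulla protezione dei dati e sull’etica),")

# Truncate to the 512-token limit used during training, then decode with beam search.
inputs = tokenizer(it_text, return_tensors="pt", truncation=True, max_length=512)
generated = model.generate(**inputs, max_length=512, num_beams=4)  # num_beams is an assumed setting
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```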
{"language": "Italian Swedish", "tags": ["translation Italian Swedish model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "K. considerando che, come avviene con tutti i sistemi di sanit\u00e0 elettronica, la progettazione, lo sviluppo e l\u2019attuazione di sistemi abilitati alla tecnologia RFID presuppongono il coinvolgimento diretto dei professionisti sanitari, dei pazienti e delle commissioni competenti (per esempio, sulla protezione dei dati e sull\u2019etica),"}]}
SEBIS/legal_t5_small_trans_it_sv
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Italian Swedish model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "Italian Swedish" ]
TAGS #transformers #pytorch #jax #t5 #text2text-generation #translation Italian Swedish model #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
legal\_t5\_small\_trans\_it\_sv model ===================================== Model on translating legal text from Italian to Swedish. It was first released in this repository. This model is trained on three parallel corpus from jrc-acquis, europarl and dcep. Model description ----------------- legal\_t5\_small\_trans\_it\_sv is based on the 't5-small' model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using 'dmodel = 512', 'dff = 2,048', 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters. Intended uses & limitations --------------------------- The model could be used for translation of legal texts from Italian to Swedish. ### How to use Here is how to use this model to translate legal text from Italian to Swedish in PyTorch: Training data ------------- The legal\_t5\_small\_trans\_it\_sv model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 5 Million parallel texts. Training procedure ------------------ The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Preprocessing An unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model. ### Pretraining Evaluation results ------------------ When the model is used for translation test dataset, achieves the following results: Test results : ### BibTeX entry and citation info > > Created by Ahmed Elnaggar/@Elnaggar\_AI | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to translate legal text from Italian to Swedish in PyTorch:\n\n\nTraining data\n-------------\n\n\nThe legal\\_t5\\_small\\_trans\\_it\\_sv model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 5 Million parallel texts.\n\n\nTraining procedure\n------------------\n\n\nThe model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Preprocessing\n\n\nAn unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.", "### Pretraining\n\n\nEvaluation results\n------------------\n\n\nWhen the model is used for translation test dataset, achieves the following results:\n\n\nTest results :", "### BibTeX entry and citation info\n\n\n\n> \n> Created by Ahmed Elnaggar/@Elnaggar\\_AI | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #text2text-generation #translation Italian Swedish model #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to translate legal text from Italian to Swedish in PyTorch:\n\n\nTraining data\n-------------\n\n\nThe legal\\_t5\\_small\\_trans\\_it\\_sv model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 5 Million parallel texts.\n\n\nTraining procedure\n------------------\n\n\nThe model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Preprocessing\n\n\nAn unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.", "### Pretraining\n\n\nEvaluation results\n------------------\n\n\nWhen the model is used for translation test dataset, achieves the following results:\n\n\nTest results :", "### BibTeX entry and citation info\n\n\n\n> \n> Created by Ahmed Elnaggar/@Elnaggar\\_AI | LinkedIn\n> \n> \n>" ]
text2text-generation
transformers
# legal_t5_small_trans_it_sv_small_finetuned model

A model for translating legal text from Italian to Swedish. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all of the translation data with an unsupervised task, and was then trained on three parallel corpora: jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_it_sv_small_finetuned was initially pretrained on an unsupervised task using all of the data in the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_it_sv_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model could be used for translation of legal texts from Italian to Swedish.

### How to use

Here is how to use this model to translate legal text from Italian to Swedish in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_it_sv_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_it_sv", do_lower_case=False, skip_special_tokens=True),
    device=0
)

it_text = "Cooperazione rafforzata Annuncio in Aula"

pipeline([it_text], max_length=512)
```

## Training data

The legal_t5_small_trans_it_sv_small_finetuned model (for the supervised task, which involved only the corresponding language pair, as well as the unsupervised task, where the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used was AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

The pre-training data was the combined data from all 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_it_sv_small_finetuned | 41.243|

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
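This card states that pretraining used "masked language modelling" over the combined translation data, but it does not spell out the masking scheme. Purely for illustration, the sketch below assumes T5-style span corruption with sentinel tokens; it is not the authors' actual preprocessing code, and the masking probability is an arbitrary choice.

```python
# Hypothetical T5-style span-corruption sketch (an assumption about the pretraining objective).
import random

def t5_style_mask(tokens, mask_prob=0.15, seed=0):
    """Corrupt a token sequence with sentinel tokens and build the matching target."""
    rng = random.Random(seed)
    inputs, targets = [], []
    sentinel = 0
    previous_masked = False
    for token in tokens:
        if rng.random() < mask_prob:
            # Consecutive masked tokens share one sentinel, as in T5 span corruption.
            if not previous_masked:
                inputs.append(f"<extra_id_{sentinel}>")
                targets.append(f"<extra_id_{sentinel}>")
                sentinel += 1
            targets.append(token)
            previous_masked = True
        else:
            inputs.append(token)
            previous_masked = False
    targets.append(f"<extra_id_{sentinel}>")  # closing sentinel
    return " ".join(inputs), " ".join(targets)

source, target = t5_style_mask("Cooperazione rafforzata Annuncio in Aula".split())
print(source)  # corrupted input fed to the encoder
print(target)  # spans the decoder must reconstruct
```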
{"language": "Italian Swedish", "tags": ["translation Italian Swedish model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Cooperazione rafforzata Annuncio in Aula"}]}
SEBIS/legal_t5_small_trans_it_sv_small_finetuned
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Italian Swedish model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "Italian Swedish" ]
TAGS #transformers #pytorch #jax #t5 #text2text-generation #translation Italian Swedish model #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
legal\_t5\_small\_trans\_it\_sv\_small\_finetuned model ======================================================= Model on translating legal text from Italian to Swedish. It was first released in this repository. This model is first pretrained all the translation data over some unsupervised task. Then the model is trained on three parallel corpus from jrc-acquis, europarl and dcep. Model description ----------------- legal\_t5\_small\_trans\_it\_sv\_small\_finetuned is initially pretrained on unsupervised task with the all of the data of the training set. The unsupervised task was "masked language modelling". legal\_t5\_small\_trans\_it\_sv\_small\_finetuned is based on the 't5-small' model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using 'dmodel = 512', 'dff = 2,048', 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters. Intended uses & limitations --------------------------- The model could be used for translation of legal texts from Italian to Swedish. ### How to use Here is how to use this model to translate legal text from Italian to Swedish in PyTorch: Training data ------------- The legal\_t5\_small\_trans\_it\_sv\_small\_finetuned (the supervised task which involved only the corresponding langauge pair and as well as unsupervised task where all of the data of all language pairs were available) model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 8 Million parallel texts. Training procedure ------------------ The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Preprocessing An unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model. ### Pretraining The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly. Evaluation results ------------------ When the model is used for translation test dataset, achieves the following results: Test results : ### BibTeX entry and citation info > > Created by Ahmed Elnaggar/@Elnaggar\_AI | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to translate legal text from Italian to Swedish in PyTorch:\n\n\nTraining data\n-------------\n\n\nThe legal\\_t5\\_small\\_trans\\_it\\_sv\\_small\\_finetuned (the supervised task which involved only the corresponding langauge pair and as well as unsupervised task where all of the data of all language pairs were available) model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 8 Million parallel texts.\n\n\nTraining procedure\n------------------\n\n\nThe model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Preprocessing\n\n\nAn unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.", "### Pretraining\n\n\nThe pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.\n\n\nEvaluation results\n------------------\n\n\nWhen the model is used for translation test dataset, achieves the following results:\n\n\nTest results :", "### BibTeX entry and citation info\n\n\n\n> \n> Created by Ahmed Elnaggar/@Elnaggar\\_AI | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #text2text-generation #translation Italian Swedish model #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to translate legal text from Italian to Swedish in PyTorch:\n\n\nTraining data\n-------------\n\n\nThe legal\\_t5\\_small\\_trans\\_it\\_sv\\_small\\_finetuned (the supervised task which involved only the corresponding langauge pair and as well as unsupervised task where all of the data of all language pairs were available) model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 8 Million parallel texts.\n\n\nTraining procedure\n------------------\n\n\nThe model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Preprocessing\n\n\nAn unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.", "### Pretraining\n\n\nThe pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.\n\n\nEvaluation results\n------------------\n\n\nWhen the model is used for translation test dataset, achieves the following results:\n\n\nTest results :", "### BibTeX entry and citation info\n\n\n\n> \n> Created by Ahmed Elnaggar/@Elnaggar\\_AI | LinkedIn\n> \n> \n>" ]
text2text-generation
transformers
# legal_t5_small_trans_sv_cs model

A model for translating legal text from Swedish to Czech. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was trained on three parallel corpora: jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_sv_cs is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model could be used for translation of legal texts from Swedish to Czech.

### How to use

Here is how to use this model to translate legal text from Swedish to Czech in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_sv_cs"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_sv_cs", do_lower_case=False, skip_special_tokens=True),
    device=0
)

sv_text = "En kvalitetscertifiering av administrativa förfaranden i enlighet med ISO eller motsvarande normer skulle dessutom leda till likvärdiga villkor för sjöfartsadministrationer."

pipeline([sv_text], max_length=512)
```

## Training data

The legal_t5_small_trans_sv_cs model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used was AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_sv_cs | 45.569|

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
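The training procedure above mentions AdaFactor with an inverse square root learning rate schedule. A minimal sketch of how such an optimizer can be configured with the transformers library follows; the specific flags are assumptions, since the card does not publish the exact optimizer settings used by the authors.

```python
# Hedged sketch of an AdaFactor setup with an inverse-square-root style schedule.
from transformers import AutoModelForSeq2SeqLM
from transformers.optimization import Adafactor, AdafactorSchedule

model = AutoModelForSeq2SeqLM.from_pretrained("SEBIS/legal_t5_small_trans_sv_cs")

# With relative_step=True and lr=None, Adafactor uses its internal time-dependent
# learning rate, which decays roughly as the inverse square root of the step count.
optimizer = Adafactor(
    model.parameters(),
    scale_parameter=True,
    relative_step=True,
    warmup_init=True,
    lr=None,
)
lr_scheduler = AdafactorSchedule(optimizer)  # proxy schedule, mainly useful for logging the lr
```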
{"language": "Swedish Cszech", "tags": ["translation Swedish Cszech model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "En kvalitetscertifiering av administrativa f\u00f6rfaranden i enlighet med ISO eller motsvarande normer skulle dessutom leda till likv\u00e4rdiga villkor f\u00f6r sj\u00f6fartsadministrationer."}]}
SEBIS/legal_t5_small_trans_sv_cs
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Swedish Cszech model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "Swedish Cszech" ]
TAGS #transformers #pytorch #jax #t5 #text2text-generation #translation Swedish Cszech model #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
legal\_t5\_small\_trans\_sv\_cs model ===================================== Model on translating legal text from Swedish to Cszech. It was first released in this repository. This model is trained on three parallel corpus from jrc-acquis, europarl and dcep. Model description ----------------- legal\_t5\_small\_trans\_sv\_cs is based on the 't5-small' model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using 'dmodel = 512', 'dff = 2,048', 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters. Intended uses & limitations --------------------------- The model could be used for translation of legal texts from Swedish to Cszech. ### How to use Here is how to use this model to translate legal text from Swedish to Cszech in PyTorch: Training data ------------- The legal\_t5\_small\_trans\_sv\_cs model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 5 Million parallel texts. Training procedure ------------------ The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Preprocessing An unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model. ### Pretraining Evaluation results ------------------ When the model is used for translation test dataset, achieves the following results: Test results : ### BibTeX entry and citation info > > Created by Ahmed Elnaggar/@Elnaggar\_AI | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to translate legal text from Swedish to Cszech in PyTorch:\n\n\nTraining data\n-------------\n\n\nThe legal\\_t5\\_small\\_trans\\_sv\\_cs model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 5 Million parallel texts.\n\n\nTraining procedure\n------------------\n\n\nThe model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Preprocessing\n\n\nAn unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.", "### Pretraining\n\n\nEvaluation results\n------------------\n\n\nWhen the model is used for translation test dataset, achieves the following results:\n\n\nTest results :", "### BibTeX entry and citation info\n\n\n\n> \n> Created by Ahmed Elnaggar/@Elnaggar\\_AI | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #text2text-generation #translation Swedish Cszech model #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to translate legal text from Swedish to Cszech in PyTorch:\n\n\nTraining data\n-------------\n\n\nThe legal\\_t5\\_small\\_trans\\_sv\\_cs model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 5 Million parallel texts.\n\n\nTraining procedure\n------------------\n\n\nThe model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Preprocessing\n\n\nAn unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.", "### Pretraining\n\n\nEvaluation results\n------------------\n\n\nWhen the model is used for translation test dataset, achieves the following results:\n\n\nTest results :", "### BibTeX entry and citation info\n\n\n\n> \n> Created by Ahmed Elnaggar/@Elnaggar\\_AI | LinkedIn\n> \n> \n>" ]
text2text-generation
transformers
# legal_t5_small_trans_sv_cs_small_finetuned model

A model for translating legal text from Swedish to Czech. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all of the translation data with an unsupervised task, and was then trained on three parallel corpora: jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_sv_cs_small_finetuned was initially pretrained on an unsupervised task using all of the data in the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_sv_cs_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model could be used for translation of legal texts from Swedish to Czech.

### How to use

Here is how to use this model to translate legal text from Swedish to Czech in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_sv_cs_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_sv_cs", do_lower_case=False, skip_special_tokens=True),
    device=0
)

sv_text = "Kommissionens personal och extern personal som bemyndigas av kommissionen måste få tillträde till bidragsmottagarens lokaler och tillgång till all information som behövs för att genomföra sådana revisioner, inbegripet information i elektronisk form."

pipeline([sv_text], max_length=512)
```

## Training data

The legal_t5_small_trans_sv_cs_small_finetuned model (for the supervised task, which involved only the corresponding language pair, as well as the unsupervised task, where the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used was AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

The pre-training data was the combined data from all 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_sv_cs_small_finetuned | 45.472|

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
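Since this checkpoint and the plain Swedish–Czech model above report very similar BLEU scores, a quick side-by-side check on a single sentence can help decide which one to use. The comparison loop below is only a sketch; it reuses the example sentence from this card and, as in the card, loads the tokenizer from the base `sv_cs` repository for both checkpoints.

```python
# Sketch: compare the base and small_finetuned sv->cs checkpoints on one sentence.
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

tokenizer = AutoTokenizer.from_pretrained("SEBIS/legal_t5_small_trans_sv_cs", do_lower_case=False)

sv_text = ("Kommissionens personal och extern personal som bemyndigas av kommissionen måste få "
           "tillträde till bidragsmottagarens lokaler och tillgång till all information som behövs "
           "för att genomföra sådana revisioner, inbegripet information i elektronisk form.")

for checkpoint in ["SEBIS/legal_t5_small_trans_sv_cs",
                   "SEBIS/legal_t5_small_trans_sv_cs_small_finetuned"]:
    pipe = TranslationPipeline(
        model=AutoModelWithLMHead.from_pretrained(checkpoint),
        tokenizer=tokenizer,
        device=0,
    )
    print(checkpoint)
    print(pipe([sv_text], max_length=512)[0]["translation_text"])
```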
{"language": "Swedish Cszech", "tags": ["translation Swedish Cszech model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Kommissionens personal och extern personal som bemyndigas av kommissionen m\u00e5ste f\u00e5 tilltr\u00e4de till bidragsmottagarens lokaler och tillg\u00e5ng till all information som beh\u00f6vs f\u00f6r att genomf\u00f6ra s\u00e5dana revisioner, inbegripet information i elektronisk form."}]}
SEBIS/legal_t5_small_trans_sv_cs_small_finetuned
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Swedish Cszech model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "Swedish Cszech" ]
TAGS #transformers #pytorch #jax #t5 #text2text-generation #translation Swedish Cszech model #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
legal\_t5\_small\_trans\_sv\_cs\_small\_finetuned model ======================================================= Model on translating legal text from Swedish to Cszech. It was first released in this repository. This model is first pretrained all the translation data over some unsupervised task. Then the model is trained on three parallel corpus from jrc-acquis, europarl and dcep. Model description ----------------- legal\_t5\_small\_trans\_sv\_cs\_small\_finetuned is initially pretrained on unsupervised task with the all of the data of the training set. The unsupervised task was "masked language modelling". legal\_t5\_small\_trans\_sv\_cs\_small\_finetuned is based on the 't5-small' model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using 'dmodel = 512', 'dff = 2,048', 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters. Intended uses & limitations --------------------------- The model could be used for translation of legal texts from Swedish to Cszech. ### How to use Here is how to use this model to translate legal text from Swedish to Cszech in PyTorch: Training data ------------- The legal\_t5\_small\_trans\_sv\_cs\_small\_finetuned (the supervised task which involved only the corresponding langauge pair and as well as unsupervised task where all of the data of all language pairs were available) model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 5 Million parallel texts. Training procedure ------------------ The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Preprocessing An unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model. ### Pretraining The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly. Evaluation results ------------------ When the model is used for translation test dataset, achieves the following results: Test results : ### BibTeX entry and citation info > > Created by Ahmed Elnaggar/@Elnaggar\_AI | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to translate legal text from Swedish to Cszech in PyTorch:\n\n\nTraining data\n-------------\n\n\nThe legal\\_t5\\_small\\_trans\\_sv\\_cs\\_small\\_finetuned (the supervised task which involved only the corresponding langauge pair and as well as unsupervised task where all of the data of all language pairs were available) model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 5 Million parallel texts.\n\n\nTraining procedure\n------------------\n\n\nThe model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Preprocessing\n\n\nAn unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.", "### Pretraining\n\n\nThe pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.\n\n\nEvaluation results\n------------------\n\n\nWhen the model is used for translation test dataset, achieves the following results:\n\n\nTest results :", "### BibTeX entry and citation info\n\n\n\n> \n> Created by Ahmed Elnaggar/@Elnaggar\\_AI | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #text2text-generation #translation Swedish Cszech model #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to translate legal text from Swedish to Cszech in PyTorch:\n\n\nTraining data\n-------------\n\n\nThe legal\\_t5\\_small\\_trans\\_sv\\_cs\\_small\\_finetuned (the supervised task which involved only the corresponding langauge pair and as well as unsupervised task where all of the data of all language pairs were available) model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 5 Million parallel texts.\n\n\nTraining procedure\n------------------\n\n\nThe model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Preprocessing\n\n\nAn unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.", "### Pretraining\n\n\nThe pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.\n\n\nEvaluation results\n------------------\n\n\nWhen the model is used for translation test dataset, achieves the following results:\n\n\nTest results :", "### BibTeX entry and citation info\n\n\n\n> \n> Created by Ahmed Elnaggar/@Elnaggar\\_AI | LinkedIn\n> \n> \n>" ]
text2text-generation
transformers
# legal_t5_small_trans_sv_de model

A model for translating legal text from Swedish to German. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was trained on three parallel corpora: jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_sv_de is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model could be used for translation of legal texts from Swedish to German.

### How to use

Here is how to use this model to translate legal text from Swedish to German in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_sv_de"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_sv_de", do_lower_case=False, skip_special_tokens=True),
    device=0
)

sv_text = "b) Bekämpning av skadegörare inom skogsbruket."

pipeline([sv_text], max_length=512)
```

## Training data

The legal_t5_small_trans_sv_de model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used was AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_sv_de | 40.264|

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
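The preprocessing paragraph describes a unigram model trained on 88M lines of parallel text to obtain the vocabulary. The sketch below shows how such a vocabulary could be built with the `sentencepiece` library; the corpus path and vocabulary size are placeholders, as the card does not state the authors' exact settings.

```python
# Hypothetical vocabulary-building sketch with sentencepiece (paths and sizes are assumptions).
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="parallel_corpus_all_pairs.txt",  # placeholder: concatenated text of all language pairs
    model_prefix="legal_t5_vocab",
    model_type="unigram",
    vocab_size=32000,                       # assumed size, not documented in the card
    character_coverage=1.0,
)

sp = spm.SentencePieceProcessor(model_file="legal_t5_vocab.model")
print(sp.encode("Bekämpning av skadegörare inom skogsbruket", out_type=str))
```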
{"language": "Swedish Deustch", "tags": ["translation Swedish Deustch model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "b) Bek\u00e4mpning av skadeg\u00f6rare inom skogsbruket."}]}
SEBIS/legal_t5_small_trans_sv_de
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Swedish Deustch model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "Swedish Deustch" ]
TAGS #transformers #pytorch #jax #t5 #text2text-generation #translation Swedish Deustch model #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
legal\_t5\_small\_trans\_sv\_de model ===================================== Model on translating legal text from Swedish to Deustch. It was first released in this repository. This model is trained on three parallel corpus from jrc-acquis, europarl and dcep. Model description ----------------- legal\_t5\_small\_trans\_sv\_de is based on the 't5-small' model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using 'dmodel = 512', 'dff = 2,048', 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters. Intended uses & limitations --------------------------- The model could be used for translation of legal texts from Swedish to Deustch. ### How to use Here is how to use this model to translate legal text from Swedish to Deustch in PyTorch: Training data ------------- The legal\_t5\_small\_trans\_sv\_de model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 5 Million parallel texts. Training procedure ------------------ The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Preprocessing An unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model. ### Pretraining Evaluation results ------------------ When the model is used for translation test dataset, achieves the following results: Test results : ### BibTeX entry and citation info > > Created by Ahmed Elnaggar/@Elnaggar\_AI | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to translate legal text from Swedish to Deustch in PyTorch:\n\n\nTraining data\n-------------\n\n\nThe legal\\_t5\\_small\\_trans\\_sv\\_de model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 5 Million parallel texts.\n\n\nTraining procedure\n------------------\n\n\nThe model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Preprocessing\n\n\nAn unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.", "### Pretraining\n\n\nEvaluation results\n------------------\n\n\nWhen the model is used for translation test dataset, achieves the following results:\n\n\nTest results :", "### BibTeX entry and citation info\n\n\n\n> \n> Created by Ahmed Elnaggar/@Elnaggar\\_AI | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #text2text-generation #translation Swedish Deustch model #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to translate legal text from Swedish to Deustch in PyTorch:\n\n\nTraining data\n-------------\n\n\nThe legal\\_t5\\_small\\_trans\\_sv\\_de model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 5 Million parallel texts.\n\n\nTraining procedure\n------------------\n\n\nThe model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Preprocessing\n\n\nAn unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.", "### Pretraining\n\n\nEvaluation results\n------------------\n\n\nWhen the model is used for translation test dataset, achieves the following results:\n\n\nTest results :", "### BibTeX entry and citation info\n\n\n\n> \n> Created by Ahmed Elnaggar/@Elnaggar\\_AI | LinkedIn\n> \n> \n>" ]
text2text-generation
transformers
# legal_t5_small_trans_sv_de_small_finetuned model

A model for translating legal text from Swedish to German. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all of the translation data with an unsupervised task, and was then trained on three parallel corpora: jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_sv_de_small_finetuned was initially pretrained on an unsupervised task using all of the data in the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_sv_de_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model could be used for translation of legal texts from Swedish to German.

### How to use

Here is how to use this model to translate legal text from Swedish to German in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_sv_de_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_sv_de", do_lower_case=False, skip_special_tokens=True),
    device=0
)

sv_text = "G. Mäns och kvinnors förmåga att delta på lika villkor i det politiska livet och i beslutsfattandet är en grundläggande förutsättning för en verklig demokrati."

pipeline([sv_text], max_length=512)
```

## Training data

The legal_t5_small_trans_sv_de_small_finetuned model (for the supervised task, which involved only the corresponding language pair, as well as the unsupervised task, where the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used was AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

The pre-training data was the combined data from all 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_sv_de_small_finetuned | 40.240|

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Swedish Deustch", "tags": ["translation Swedish Deustch model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "G. M\u00e4ns och kvinnors f\u00f6rm\u00e5ga att delta p\u00e5 lika villkor i det politiska livet och i beslutsfattandet \u00e4r en grundl\u00e4ggande f\u00f6ruts\u00e4ttning f\u00f6r en verklig demokrati."}]}
SEBIS/legal_t5_small_trans_sv_de_small_finetuned
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Swedish Deustch model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "Swedish Deustch" ]
TAGS #transformers #pytorch #jax #t5 #text2text-generation #translation Swedish Deustch model #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
legal\_t5\_small\_trans\_sv\_de\_small\_finetuned model ======================================================= Model on translating legal text from Swedish to Deustch. It was first released in this repository. This model is first pretrained all the translation data over some unsupervised task. Then the model is trained on three parallel corpus from jrc-acquis, europarl and dcep. Model description ----------------- legal\_t5\_small\_trans\_sv\_de\_small\_finetuned is initially pretrained on unsupervised task with the all of the data of the training set. The unsupervised task was "masked language modelling". legal\_t5\_small\_trans\_sv\_de\_small\_finetuned is based on the 't5-small' model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using 'dmodel = 512', 'dff = 2,048', 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters. Intended uses & limitations --------------------------- The model could be used for translation of legal texts from Swedish to Deustch. ### How to use Here is how to use this model to translate legal text from Swedish to Deustch in PyTorch: Training data ------------- The legal\_t5\_small\_trans\_sv\_de\_small\_finetuned (the supervised task which involved only the corresponding langauge pair and as well as unsupervised task where all of the data of all language pairs were available) model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 8 Million parallel texts. Training procedure ------------------ The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Preprocessing An unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model. ### Pretraining The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly. Evaluation results ------------------ When the model is used for translation test dataset, achieves the following results: Test results : ### BibTeX entry and citation info > > Created by Ahmed Elnaggar/@Elnaggar\_AI | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to translate legal text from Swedish to Deustch in PyTorch:\n\n\nTraining data\n-------------\n\n\nThe legal\\_t5\\_small\\_trans\\_sv\\_de\\_small\\_finetuned (the supervised task which involved only the corresponding langauge pair and as well as unsupervised task where all of the data of all language pairs were available) model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 8 Million parallel texts.\n\n\nTraining procedure\n------------------\n\n\nThe model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Preprocessing\n\n\nAn unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.", "### Pretraining\n\n\nThe pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.\n\n\nEvaluation results\n------------------\n\n\nWhen the model is used for translation test dataset, achieves the following results:\n\n\nTest results :", "### BibTeX entry and citation info\n\n\n\n> \n> Created by Ahmed Elnaggar/@Elnaggar\\_AI | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #text2text-generation #translation Swedish Deustch model #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to translate legal text from Swedish to Deustch in PyTorch:\n\n\nTraining data\n-------------\n\n\nThe legal\\_t5\\_small\\_trans\\_sv\\_de\\_small\\_finetuned (the supervised task which involved only the corresponding langauge pair and as well as unsupervised task where all of the data of all language pairs were available) model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 8 Million parallel texts.\n\n\nTraining procedure\n------------------\n\n\nThe model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Preprocessing\n\n\nAn unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.", "### Pretraining\n\n\nThe pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.\n\n\nEvaluation results\n------------------\n\n\nWhen the model is used for translation test dataset, achieves the following results:\n\n\nTest results :", "### BibTeX entry and citation info\n\n\n\n> \n> Created by Ahmed Elnaggar/@Elnaggar\\_AI | LinkedIn\n> \n> \n>" ]
text2text-generation
transformers
# legal_t5_small_trans_sv_en model

Model for translating legal text from Swedish to English. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_sv_en is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model could be used for translation of legal texts from Swedish to English.

### How to use

Here is how to use this model to translate legal text from Swedish to English in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_sv_en"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_sv_en", do_lower_case=False, skip_special_tokens=True),
    device=0,
)

sv_text = "Om rättsliga förfaranden inleds rörande omständigheter som ombudsmannen utreder skall han avsluta ärendet."

pipeline([sv_text], max_length=512)
```

## Training data

The legal_t5_small_trans_sv_en model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 Million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results :

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_sv_en | 52.025 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
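As an addendum to the Preprocessing section above: the card only states that a unigram subword vocabulary was built from 88M lines of the parallel corpus and does not publish the command used. The sketch below shows how such a vocabulary could be produced with SentencePiece; the input file name, vocabulary size, and character coverage are assumptions for illustration, not values from the card.

```python
# Hypothetical sketch: building a unigram subword vocabulary with SentencePiece.
# "parallel_corpus.txt" and the hyperparameters below are illustrative assumptions.
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="parallel_corpus.txt",    # one sentence per line, all language pairs mixed
    model_prefix="legal_t5_vocab",  # writes legal_t5_vocab.model / legal_t5_vocab.vocab
    vocab_size=32000,               # assumed; the card does not state the vocabulary size
    model_type="unigram",           # unigram language model, as described in the card
    character_coverage=1.0,         # keep all characters seen in the legal corpora
)

# Load the trained model and segment a sentence into subword pieces.
sp = spm.SentencePieceProcessor(model_file="legal_t5_vocab.model")
print(sp.encode("Om rättsliga förfaranden inleds rörande omständigheter", out_type=str))
```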
{"language": "Swedish English", "tags": ["translation Swedish English model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Om r\u00e4ttsliga f\u00f6rfaranden inleds r\u00f6rande omst\u00e4ndigheter som ombudsmannen utreder skall han avsluta \u00e4rendet."}]}
SEBIS/legal_t5_small_trans_sv_en
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Swedish English model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "Swedish English" ]
TAGS #transformers #pytorch #jax #t5 #text2text-generation #translation Swedish English model #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
legal\_t5\_small\_trans\_sv\_en model ===================================== Model on translating legal text from Swedish to English. It was first released in this repository. This model is trained on three parallel corpus from jrc-acquis, europarl and dcep. Model description ----------------- legal\_t5\_small\_trans\_sv\_en is based on the 't5-small' model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using 'dmodel = 512', 'dff = 2,048', 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters. Intended uses & limitations --------------------------- The model could be used for translation of legal texts from Swedish to English. ### How to use Here is how to use this model to translate legal text from Swedish to English in PyTorch: Training data ------------- The legal\_t5\_small\_trans\_sv\_en model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 5 Million parallel texts. Training procedure ------------------ The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Preprocessing An unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model. ### Pretraining Evaluation results ------------------ When the model is used for translation test dataset, achieves the following results: Test results : ### BibTeX entry and citation info > > Created by Ahmed Elnaggar/@Elnaggar\_AI | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to translate legal text from Swedish to English in PyTorch:\n\n\nTraining data\n-------------\n\n\nThe legal\\_t5\\_small\\_trans\\_sv\\_en model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 5 Million parallel texts.\n\n\nTraining procedure\n------------------\n\n\nThe model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Preprocessing\n\n\nAn unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.", "### Pretraining\n\n\nEvaluation results\n------------------\n\n\nWhen the model is used for translation test dataset, achieves the following results:\n\n\nTest results :", "### BibTeX entry and citation info\n\n\n\n> \n> Created by Ahmed Elnaggar/@Elnaggar\\_AI | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #text2text-generation #translation Swedish English model #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to translate legal text from Swedish to English in PyTorch:\n\n\nTraining data\n-------------\n\n\nThe legal\\_t5\\_small\\_trans\\_sv\\_en model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 5 Million parallel texts.\n\n\nTraining procedure\n------------------\n\n\nThe model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Preprocessing\n\n\nAn unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.", "### Pretraining\n\n\nEvaluation results\n------------------\n\n\nWhen the model is used for translation test dataset, achieves the following results:\n\n\nTest results :", "### BibTeX entry and citation info\n\n\n\n> \n> Created by Ahmed Elnaggar/@Elnaggar\\_AI | LinkedIn\n> \n> \n>" ]
text2text-generation
transformers
# legal_t5_small_trans_sv_en_small_finetuned model

Model for translating legal text from Swedish to English. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model is first pretrained on all the translation data with an unsupervised task, and then trained on three parallel corpora from jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_sv_en_small_finetuned is initially pretrained on an unsupervised task with all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_sv_en_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model could be used for translation of legal texts from Swedish to English.

### How to use

Here is how to use this model to translate legal text from Swedish to English in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_sv_en_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_sv_en", do_lower_case=False, skip_special_tokens=True),
    device=0,
)

sv_text = "Alejo Vidal-Quadras : 262 röster"

pipeline([sv_text], max_length=512)
```

## Training data

The legal_t5_small_trans_sv_en_small_finetuned model (covering both the supervised task, which involved only the corresponding language pair, and the unsupervised task, where the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 Million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results :

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_sv_en_small_finetuned | 52.084 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
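As an addendum to the Pretraining section above: the card describes the unsupervised objective only as "masked language modelling" over the combined data of all 42 language pairs. The snippet below is an illustrative, non-authoritative rendering of that idea in a T5-style sentinel-token form; the masking rate and span lengths are assumptions, not published settings.

```python
# Illustrative sketch of span masking for a masked-language-modelling objective:
# random spans of the input are replaced by sentinel tokens, and the target lists
# the sentinels followed by the tokens they hide. Hyperparameters are assumed.
import random

def mask_spans(tokens, mask_rate=0.15, max_span=3, seed=0):
    """Replace random token spans with sentinels and build the reconstruction target."""
    rng = random.Random(seed)
    inputs, targets, sentinel = [], [], 0
    i = 0
    while i < len(tokens):
        if rng.random() < mask_rate:
            span = rng.randint(1, max_span)          # assumed span-length range
            inputs.append(f"<extra_id_{sentinel}>")
            targets.append(f"<extra_id_{sentinel}>")
            targets.extend(tokens[i:i + span])
            sentinel += 1
            i += span
        else:
            inputs.append(tokens[i])
            i += 1
    return " ".join(inputs), " ".join(targets)

src, tgt = mask_spans("Om rättsliga förfaranden inleds rörande omständigheter som ombudsmannen utreder".split())
print(src)  # corrupted input with sentinel tokens
print(tgt)  # target: sentinels followed by the masked tokens
```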
{"language": "Swedish English", "tags": ["translation Swedish English model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Alejo Vidal-Quadras : 262 r\u00f6ster"}]}
SEBIS/legal_t5_small_trans_sv_en_small_finetuned
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Swedish English model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "Swedish English" ]
TAGS #transformers #pytorch #jax #t5 #text2text-generation #translation Swedish English model #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
legal\_t5\_small\_trans\_sv\_en\_small\_finetuned model ======================================================= Model on translating legal text from Swedish to English. It was first released in this repository. This model is first pretrained all the translation data over some unsupervised task. Then the model is trained on three parallel corpus from jrc-acquis, europarl and dcep. Model description ----------------- legal\_t5\_small\_trans\_sv\_en\_small\_finetuned is initially pretrained on unsupervised task with the all of the data of the training set. The unsupervised task was "masked language modelling". legal\_t5\_small\_trans\_sv\_en\_small\_finetuned is based on the 't5-small' model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using 'dmodel = 512', 'dff = 2,048', 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters. Intended uses & limitations --------------------------- The model could be used for translation of legal texts from Swedish to English. ### How to use Here is how to use this model to translate legal text from Swedish to English in PyTorch: Training data ------------- The legal\_t5\_small\_trans\_sv\_en\_small\_finetuned (the supervised task which involved only the corresponding langauge pair and as well as unsupervised task where all of the data of all language pairs were available) model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 9 Million parallel texts. Training procedure ------------------ The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Preprocessing An unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model. ### Pretraining The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly. Evaluation results ------------------ When the model is used for translation test dataset, achieves the following results: Test results : ### BibTeX entry and citation info > > Created by Ahmed Elnaggar/@Elnaggar\_AI | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to translate legal text from Swedish to English in PyTorch:\n\n\nTraining data\n-------------\n\n\nThe legal\\_t5\\_small\\_trans\\_sv\\_en\\_small\\_finetuned (the supervised task which involved only the corresponding langauge pair and as well as unsupervised task where all of the data of all language pairs were available) model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 9 Million parallel texts.\n\n\nTraining procedure\n------------------\n\n\nThe model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Preprocessing\n\n\nAn unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.", "### Pretraining\n\n\nThe pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.\n\n\nEvaluation results\n------------------\n\n\nWhen the model is used for translation test dataset, achieves the following results:\n\n\nTest results :", "### BibTeX entry and citation info\n\n\n\n> \n> Created by Ahmed Elnaggar/@Elnaggar\\_AI | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #text2text-generation #translation Swedish English model #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to translate legal text from Swedish to English in PyTorch:\n\n\nTraining data\n-------------\n\n\nThe legal\\_t5\\_small\\_trans\\_sv\\_en\\_small\\_finetuned (the supervised task which involved only the corresponding langauge pair and as well as unsupervised task where all of the data of all language pairs were available) model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 9 Million parallel texts.\n\n\nTraining procedure\n------------------\n\n\nThe model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Preprocessing\n\n\nAn unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.", "### Pretraining\n\n\nThe pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.\n\n\nEvaluation results\n------------------\n\n\nWhen the model is used for translation test dataset, achieves the following results:\n\n\nTest results :", "### BibTeX entry and citation info\n\n\n\n> \n> Created by Ahmed Elnaggar/@Elnaggar\\_AI | LinkedIn\n> \n> \n>" ]
text2text-generation
transformers
# legal_t5_small_trans_sv_es model

Model for translating legal text from Swedish to Spanish. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_sv_es is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model could be used for translation of legal texts from Swedish to Spanish.

### How to use

Here is how to use this model to translate legal text from Swedish to Spanish in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_sv_es"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_sv_es", do_lower_case=False, skip_special_tokens=True),
    device=0,
)

sv_text = "Monika Flašíková Beňová (S&D)"

pipeline([sv_text], max_length=512)
```

## Training data

The legal_t5_small_trans_sv_es model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 Million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results :

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_sv_es | 47.407 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
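As an addendum to the Evaluation results above: the card reports a corpus-level BLEU score but does not ship the evaluation script. A minimal sketch of how such a score could be computed with sacrebleu is shown below; the test-set file names are placeholders, and the decoding settings are assumptions rather than the ones behind the reported number.

```python
# Hedged sketch: corpus-level BLEU over a held-out parallel test set.
import sacrebleu
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_sv_es"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/legal_t5_small_trans_sv_es"),
    device=0,  # assumes a GPU; use device=-1 for CPU
)

# "test.sv" / "test.es" are placeholder file names, one sentence per line.
with open("test.sv", encoding="utf-8") as f_src, open("test.es", encoding="utf-8") as f_ref:
    sources = [line.strip() for line in f_src]
    references = [line.strip() for line in f_ref]

hypotheses = [out["translation_text"] for out in pipeline(sources, max_length=512)]
print(sacrebleu.corpus_bleu(hypotheses, [references]).score)
```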
{"language": "Swedish Spanish", "tags": ["translation Swedish Spanish model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Monika Fla\u0161\u00edkov\u00e1 Be\u0148ov\u00e1 (S&D)"}]}
SEBIS/legal_t5_small_trans_sv_es
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Swedish Spanish model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "Swedish Spanish" ]
TAGS #transformers #pytorch #jax #t5 #text2text-generation #translation Swedish Spanish model #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
legal\_t5\_small\_trans\_sv\_es model ===================================== Model on translating legal text from Swedish to Spanish. It was first released in this repository. This model is trained on three parallel corpus from jrc-acquis, europarl and dcep. Model description ----------------- legal\_t5\_small\_trans\_sv\_es is based on the 't5-small' model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using 'dmodel = 512', 'dff = 2,048', 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters. Intended uses & limitations --------------------------- The model could be used for translation of legal texts from Swedish to Spanish. ### How to use Here is how to use this model to translate legal text from Swedish to Spanish in PyTorch: Training data ------------- The legal\_t5\_small\_trans\_sv\_es model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 5 Million parallel texts. Training procedure ------------------ The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Preprocessing An unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model. ### Pretraining Evaluation results ------------------ When the model is used for translation test dataset, achieves the following results: Test results : ### BibTeX entry and citation info > > Created by Ahmed Elnaggar/@Elnaggar\_AI | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to translate legal text from Swedish to Spanish in PyTorch:\n\n\nTraining data\n-------------\n\n\nThe legal\\_t5\\_small\\_trans\\_sv\\_es model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 5 Million parallel texts.\n\n\nTraining procedure\n------------------\n\n\nThe model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Preprocessing\n\n\nAn unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.", "### Pretraining\n\n\nEvaluation results\n------------------\n\n\nWhen the model is used for translation test dataset, achieves the following results:\n\n\nTest results :", "### BibTeX entry and citation info\n\n\n\n> \n> Created by Ahmed Elnaggar/@Elnaggar\\_AI | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #text2text-generation #translation Swedish Spanish model #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to translate legal text from Swedish to Spanish in PyTorch:\n\n\nTraining data\n-------------\n\n\nThe legal\\_t5\\_small\\_trans\\_sv\\_es model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 5 Million parallel texts.\n\n\nTraining procedure\n------------------\n\n\nThe model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Preprocessing\n\n\nAn unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.", "### Pretraining\n\n\nEvaluation results\n------------------\n\n\nWhen the model is used for translation test dataset, achieves the following results:\n\n\nTest results :", "### BibTeX entry and citation info\n\n\n\n> \n> Created by Ahmed Elnaggar/@Elnaggar\\_AI | LinkedIn\n> \n> \n>" ]
text2text-generation
transformers
# legal_t5_small_trans_sv_es_small_finetuned model

Model for translating legal text from Swedish to Spanish. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model is first pretrained on all the translation data with an unsupervised task, and then trained on three parallel corpora from jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_sv_es_small_finetuned is initially pretrained on an unsupervised task with all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_sv_es_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model could be used for translation of legal texts from Swedish to Spanish.

### How to use

Here is how to use this model to translate legal text from Swedish to Spanish in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_sv_es_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_sv_es", do_lower_case=False, skip_special_tokens=True),
    device=0,
)

sv_text = "– med beaktande av kommissionen vitbok om idrott ( KOM(2007)0391 ),"

pipeline([sv_text], max_length=512)
```

## Training data

The legal_t5_small_trans_sv_es_small_finetuned model (covering both the supervised task, which involved only the corresponding language pair, and the unsupervised task, where the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 Million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results :

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_sv_es_small_finetuned | 47.411 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
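As an addendum to the usage snippet above: the card only shows the pipeline wrapper. The same translation can be produced by calling the tokenizer and `generate` directly, as sketched below; the beam size is an illustrative choice, not a documented setting.

```python
# Sketch: tokenize, generate, and decode without the pipeline wrapper.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("SEBIS/legal_t5_small_trans_sv_es")  # tokenizer repo as in the card
model = AutoModelForSeq2SeqLM.from_pretrained("SEBIS/legal_t5_small_trans_sv_es_small_finetuned")

sv_text = "– med beaktande av kommissionen vitbok om idrott ( KOM(2007)0391 ),"
inputs = tokenizer(sv_text, return_tensors="pt", truncation=True, max_length=512)
outputs = model.generate(**inputs, max_length=512, num_beams=4)  # assumed beam size
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```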
{"language": "Swedish Spanish", "tags": ["translation Swedish Spanish model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "\u2013 med beaktande av kommissionen vitbok om idrott ( KOM(2007)0391 ),"}]}
SEBIS/legal_t5_small_trans_sv_es_small_finetuned
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Swedish Spanish model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "Swedish Spanish" ]
TAGS #transformers #pytorch #jax #t5 #text2text-generation #translation Swedish Spanish model #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
legal\_t5\_small\_trans\_sv\_es\_small\_finetuned model ======================================================= Model on translating legal text from Swedish to Spanish. It was first released in this repository. This model is first pretrained all the translation data over some unsupervised task. Then the model is trained on three parallel corpus from jrc-acquis, europarl and dcep. Model description ----------------- legal\_t5\_small\_trans\_sv\_es\_small\_finetuned is initially pretrained on unsupervised task with the all of the data of the training set. The unsupervised task was "masked language modelling". legal\_t5\_small\_trans\_sv\_es\_small\_finetuned is based on the 't5-small' model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using 'dmodel = 512', 'dff = 2,048', 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters. Intended uses & limitations --------------------------- The model could be used for translation of legal texts from Swedish to Spanish. ### How to use Here is how to use this model to translate legal text from Swedish to Spanish in PyTorch: Training data ------------- The legal\_t5\_small\_trans\_sv\_es\_small\_finetuned (the supervised task which involved only the corresponding langauge pair and as well as unsupervised task where all of the data of all language pairs were available) model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 8 Million parallel texts. Training procedure ------------------ The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Preprocessing An unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model. ### Pretraining The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly. Evaluation results ------------------ When the model is used for translation test dataset, achieves the following results: Test results : ### BibTeX entry and citation info > > Created by Ahmed Elnaggar/@Elnaggar\_AI | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to translate legal text from Swedish to Spanish in PyTorch:\n\n\nTraining data\n-------------\n\n\nThe legal\\_t5\\_small\\_trans\\_sv\\_es\\_small\\_finetuned (the supervised task which involved only the corresponding langauge pair and as well as unsupervised task where all of the data of all language pairs were available) model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 8 Million parallel texts.\n\n\nTraining procedure\n------------------\n\n\nThe model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Preprocessing\n\n\nAn unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.", "### Pretraining\n\n\nThe pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.\n\n\nEvaluation results\n------------------\n\n\nWhen the model is used for translation test dataset, achieves the following results:\n\n\nTest results :", "### BibTeX entry and citation info\n\n\n\n> \n> Created by Ahmed Elnaggar/@Elnaggar\\_AI | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #text2text-generation #translation Swedish Spanish model #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to translate legal text from Swedish to Spanish in PyTorch:\n\n\nTraining data\n-------------\n\n\nThe legal\\_t5\\_small\\_trans\\_sv\\_es\\_small\\_finetuned (the supervised task which involved only the corresponding langauge pair and as well as unsupervised task where all of the data of all language pairs were available) model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 8 Million parallel texts.\n\n\nTraining procedure\n------------------\n\n\nThe model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Preprocessing\n\n\nAn unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.", "### Pretraining\n\n\nThe pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.\n\n\nEvaluation results\n------------------\n\n\nWhen the model is used for translation test dataset, achieves the following results:\n\n\nTest results :", "### BibTeX entry and citation info\n\n\n\n> \n> Created by Ahmed Elnaggar/@Elnaggar\\_AI | LinkedIn\n> \n> \n>" ]
text2text-generation
transformers
# legal_t5_small_trans_sv_fr model

Model for translating legal text from Swedish to French. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_sv_fr is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model could be used for translation of legal texts from Swedish to French.

### How to use

Here is how to use this model to translate legal text from Swedish to French in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_sv_fr"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_sv_fr", do_lower_case=False, skip_special_tokens=True),
    device=0,
)

sv_text = "Kunden måste ha rätt att avsäga sig information i skriftlig form."

pipeline([sv_text], max_length=512)
```

## Training data

The legal_t5_small_trans_sv_fr model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 Million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results :

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_sv_fr | 47.623 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
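As an addendum to the Training procedure above: the card mentions AdaFactor with an inverse square root learning rate schedule but gives no concrete numbers. The function below is a generic sketch of such a schedule; the warm-up length and peak rate are assumptions, not values taken from the card.

```python
# Generic inverse-square-root schedule sketch (assumed warm-up and peak rate).
def inverse_sqrt_lr(step, warmup_steps=10_000, peak_lr=0.01):
    """Linear ramp during warm-up, then decay proportional to 1/sqrt(step)."""
    step = max(step, 1)
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * (warmup_steps ** 0.5) / (step ** 0.5)

for s in (1, 5_000, 10_000, 40_000, 250_000):
    print(s, round(inverse_sqrt_lr(s), 5))
```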
{"language": "Swedish French", "tags": ["translation Swedish French model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Kunden m\u00e5ste ha r\u00e4tt att avs\u00e4ga sig information i skriftlig form."}]}
SEBIS/legal_t5_small_trans_sv_fr
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Swedish French model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "Swedish French" ]
TAGS #transformers #pytorch #jax #t5 #text2text-generation #translation Swedish French model #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
legal\_t5\_small\_trans\_sv\_fr model ===================================== Model on translating legal text from Swedish to French. It was first released in this repository. This model is trained on three parallel corpus from jrc-acquis, europarl and dcep. Model description ----------------- legal\_t5\_small\_trans\_sv\_fr is based on the 't5-small' model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using 'dmodel = 512', 'dff = 2,048', 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters. Intended uses & limitations --------------------------- The model could be used for translation of legal texts from Swedish to French. ### How to use Here is how to use this model to translate legal text from Swedish to French in PyTorch: Training data ------------- The legal\_t5\_small\_trans\_sv\_fr model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 5 Million parallel texts. Training procedure ------------------ The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Preprocessing An unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model. ### Pretraining Evaluation results ------------------ When the model is used for translation test dataset, achieves the following results: Test results : ### BibTeX entry and citation info > > Created by Ahmed Elnaggar/@Elnaggar\_AI | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to translate legal text from Swedish to French in PyTorch:\n\n\nTraining data\n-------------\n\n\nThe legal\\_t5\\_small\\_trans\\_sv\\_fr model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 5 Million parallel texts.\n\n\nTraining procedure\n------------------\n\n\nThe model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Preprocessing\n\n\nAn unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.", "### Pretraining\n\n\nEvaluation results\n------------------\n\n\nWhen the model is used for translation test dataset, achieves the following results:\n\n\nTest results :", "### BibTeX entry and citation info\n\n\n\n> \n> Created by Ahmed Elnaggar/@Elnaggar\\_AI | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #text2text-generation #translation Swedish French model #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to translate legal text from Swedish to French in PyTorch:\n\n\nTraining data\n-------------\n\n\nThe legal\\_t5\\_small\\_trans\\_sv\\_fr model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 5 Million parallel texts.\n\n\nTraining procedure\n------------------\n\n\nThe model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Preprocessing\n\n\nAn unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.", "### Pretraining\n\n\nEvaluation results\n------------------\n\n\nWhen the model is used for translation test dataset, achieves the following results:\n\n\nTest results :", "### BibTeX entry and citation info\n\n\n\n> \n> Created by Ahmed Elnaggar/@Elnaggar\\_AI | LinkedIn\n> \n> \n>" ]
text2text-generation
transformers
# legal_t5_small_trans_sv_fr_small_finetuned model

Model for translating legal text from Swedish to French. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model is first pretrained on all the translation data with an unsupervised task, and then trained on three parallel corpora from jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_sv_fr_small_finetuned is initially pretrained on an unsupervised task with all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_sv_fr_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model could be used for translation of legal texts from Swedish to French.

### How to use

Here is how to use this model to translate legal text from Swedish to French in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_sv_fr_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_sv_fr", do_lower_case=False, skip_special_tokens=True),
    device=0,
)

sv_text = "Samreglering bör följa samma principer som de formella bestämmelserna, vilket betyder att den bör vara objektiv, välgrundad, proportionell och icke-diskriminerande, och bör möjliggöra insyn."

pipeline([sv_text], max_length=512)
```

## Training data

The legal_t5_small_trans_sv_fr_small_finetuned model (covering both the supervised task, which involved only the corresponding language pair, and the unsupervised task, where the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 Million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results :

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_sv_fr_small_finetuned | 47.508 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Swedish French", "tags": ["translation Swedish French model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Samreglering b\u00f6r f\u00f6lja samma principer som de formella best\u00e4mmelserna, vilket betyder att den b\u00f6r vara objektiv, v\u00e4lgrundad, proportionell och icke-diskriminerande, och b\u00f6r m\u00f6jligg\u00f6ra insyn."}]}
SEBIS/legal_t5_small_trans_sv_fr_small_finetuned
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Swedish French model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "Swedish French" ]
TAGS #transformers #pytorch #jax #t5 #text2text-generation #translation Swedish French model #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
legal\_t5\_small\_trans\_sv\_fr\_small\_finetuned model ======================================================= Model on translating legal text from Swedish to French. It was first released in this repository. This model is first pretrained all the translation data over some unsupervised task. Then the model is trained on three parallel corpus from jrc-acquis, europarl and dcep. Model description ----------------- legal\_t5\_small\_trans\_sv\_fr\_small\_finetuned is initially pretrained on unsupervised task with the all of the data of the training set. The unsupervised task was "masked language modelling". legal\_t5\_small\_trans\_sv\_fr\_small\_finetuned is based on the 't5-small' model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using 'dmodel = 512', 'dff = 2,048', 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters. Intended uses & limitations --------------------------- The model could be used for translation of legal texts from Swedish to French. ### How to use Here is how to use this model to translate legal text from Swedish to French in PyTorch: Training data ------------- The legal\_t5\_small\_trans\_sv\_fr\_small\_finetuned (the supervised task which involved only the corresponding langauge pair and as well as unsupervised task where all of the data of all language pairs were available) model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 8 Million parallel texts. Training procedure ------------------ The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Preprocessing An unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model. ### Pretraining The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly. Evaluation results ------------------ When the model is used for translation test dataset, achieves the following results: Test results : ### BibTeX entry and citation info > > Created by Ahmed Elnaggar/@Elnaggar\_AI | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to translate legal text from Swedish to French in PyTorch:\n\n\nTraining data\n-------------\n\n\nThe legal\\_t5\\_small\\_trans\\_sv\\_fr\\_small\\_finetuned (the supervised task which involved only the corresponding langauge pair and as well as unsupervised task where all of the data of all language pairs were available) model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 8 Million parallel texts.\n\n\nTraining procedure\n------------------\n\n\nThe model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Preprocessing\n\n\nAn unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.", "### Pretraining\n\n\nThe pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.\n\n\nEvaluation results\n------------------\n\n\nWhen the model is used for translation test dataset, achieves the following results:\n\n\nTest results :", "### BibTeX entry and citation info\n\n\n\n> \n> Created by Ahmed Elnaggar/@Elnaggar\\_AI | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #text2text-generation #translation Swedish French model #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to translate legal text from Swedish to French in PyTorch:\n\n\nTraining data\n-------------\n\n\nThe legal\\_t5\\_small\\_trans\\_sv\\_fr\\_small\\_finetuned (the supervised task which involved only the corresponding langauge pair and as well as unsupervised task where all of the data of all language pairs were available) model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 8 Million parallel texts.\n\n\nTraining procedure\n------------------\n\n\nThe model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Preprocessing\n\n\nAn unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.", "### Pretraining\n\n\nThe pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.\n\n\nEvaluation results\n------------------\n\n\nWhen the model is used for translation test dataset, achieves the following results:\n\n\nTest results :", "### BibTeX entry and citation info\n\n\n\n> \n> Created by Ahmed Elnaggar/@Elnaggar\\_AI | LinkedIn\n> \n> \n>" ]
text2text-generation
transformers
# legal_t5_small_trans_sv_it model

Model for translating legal text from Swedish to Italian. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_sv_it is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model could be used for translation of legal texts from Swedish to Italian.

### How to use

Here is how to use this model to translate legal text from Swedish to Italian in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_sv_it"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_sv_it", do_lower_case=False, skip_special_tokens=True),
    device=0,
)

sv_text = "Den 25 juni 2002 lade kommissionen fram ett förslag till förordning om ”kontroller av kontanta medel som förs in i eller ut ur gemenskapen” i syfte att komplettera direktiv 91/308/EEG om penningtvätt."

pipeline([sv_text], max_length=512)
```

## Training data

The legal_t5_small_trans_sv_it model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 5 Million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results :

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_sv_it | 42.577 |

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
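A note on the usage snippets in these cards: they load the checkpoint through `AutoModelWithLMHead`, which recent transformers releases have deprecated in favour of task-specific classes. If that import is unavailable in your environment, the sketch below uses `AutoModelForSeq2SeqLM` instead; for this checkpoint the behaviour is expected, though not verified here, to be identical.

```python
# Alternative loading path using the non-deprecated seq2seq auto class.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelForSeq2SeqLM.from_pretrained("SEBIS/legal_t5_small_trans_sv_it"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/legal_t5_small_trans_sv_it"),
    device=0,  # assumes a GPU; use device=-1 for CPU
)

sv_text = "Den 25 juni 2002 lade kommissionen fram ett förslag till förordning om ”kontroller av kontanta medel som förs in i eller ut ur gemenskapen” i syfte att komplettera direktiv 91/308/EEG om penningtvätt."
print(pipeline([sv_text], max_length=512))
```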
{"language": "Swedish Italian", "tags": ["translation Swedish Italian model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "Den 25 juni 2002 lade kommissionen fram ett f\u00f6rslag till f\u00f6rordning om \u201dkontroller av kontanta medel som f\u00f6rs in i eller ut ur gemenskapen\u201d i syfte att komplettera direktiv 91/308/EEG om penningtv\u00e4tt."}]}
SEBIS/legal_t5_small_trans_sv_it
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Swedish Italian model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "Swedish Italian" ]
TAGS #transformers #pytorch #jax #t5 #text2text-generation #translation Swedish Italian model #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
legal\_t5\_small\_trans\_sv\_it model ===================================== Model on translating legal text from Swedish to Italian. It was first released in this repository. This model is trained on three parallel corpus from jrc-acquis, europarl and dcep. Model description ----------------- legal\_t5\_small\_trans\_sv\_it is based on the 't5-small' model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using 'dmodel = 512', 'dff = 2,048', 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters. Intended uses & limitations --------------------------- The model could be used for translation of legal texts from Swedish to Italian. ### How to use Here is how to use this model to translate legal text from Swedish to Italian in PyTorch: Training data ------------- The legal\_t5\_small\_trans\_sv\_it model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 5 Million parallel texts. Training procedure ------------------ The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Preprocessing An unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model. ### Pretraining Evaluation results ------------------ When the model is used for translation test dataset, achieves the following results: Test results : ### BibTeX entry and citation info > > Created by Ahmed Elnaggar/@Elnaggar\_AI | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to translate legal text from Swedish to Italian in PyTorch:\n\n\nTraining data\n-------------\n\n\nThe legal\\_t5\\_small\\_trans\\_sv\\_it model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 5 Million parallel texts.\n\n\nTraining procedure\n------------------\n\n\nThe model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Preprocessing\n\n\nAn unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.", "### Pretraining\n\n\nEvaluation results\n------------------\n\n\nWhen the model is used for translation test dataset, achieves the following results:\n\n\nTest results :", "### BibTeX entry and citation info\n\n\n\n> \n> Created by Ahmed Elnaggar/@Elnaggar\\_AI | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #text2text-generation #translation Swedish Italian model #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to translate legal text from Swedish to Italian in PyTorch:\n\n\nTraining data\n-------------\n\n\nThe legal\\_t5\\_small\\_trans\\_sv\\_it model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 5 Million parallel texts.\n\n\nTraining procedure\n------------------\n\n\nThe model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Preprocessing\n\n\nAn unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.", "### Pretraining\n\n\nEvaluation results\n------------------\n\n\nWhen the model is used for translation test dataset, achieves the following results:\n\n\nTest results :", "### BibTeX entry and citation info\n\n\n\n> \n> Created by Ahmed Elnaggar/@Elnaggar\\_AI | LinkedIn\n> \n> \n>" ]
text2text-generation
transformers
# legal_t5_small_trans_sv_it_small_finetuned model

Model for translating legal text from Swedish to Italian. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all the translation data with an unsupervised task. Then the model was trained on three parallel corpora from jrc-acquis, europarl and dcep.

## Model description

legal_t5_small_trans_sv_it_small_finetuned was initially pretrained on an unsupervised task with all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_sv_it_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model could be used for translation of legal texts from Swedish to Italian.

### How to use

Here is how to use this model to translate legal text from Swedish to Italian in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_sv_it_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_sv_it", do_lower_case=False, skip_special_tokens=True),
    device=0
)

sv_text = "– med beaktande av rådet beslut om Syrien av den 12 april, 9 och 23 maj, 20 och 25 juni samt den 2 september 2011 och av uttalandena från unionens höga representant av den 9, 23 och 29 april, 9 maj, 6, 9 och 11 juni, 9 och 31 juli, 1, 4, 18 och 30 augusti samt den 2 september 2011 om en utvidgning av de restriktiva åtgärderna mot den syriska regimen,"

pipeline([sv_text], max_length=512)
```

## Training data

The legal_t5_small_trans_sv_it_small_finetuned model (the supervised task involved only the corresponding language pair, while the unsupervised task used the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets consisting of 8 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
|   legal_t5_small_trans_sv_it_small_finetuned | 42.575|


### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"language": "Swedish Italian", "tags": ["translation Swedish Italian model"], "datasets": ["dcep europarl jrc-acquis"], "widget": [{"text": "\u2013 med beaktande av r\u00e5det beslut om Syrien av den 12 april, 9 och 23 maj, 20 och 25 juni samt den 2 september 2011 och av uttalandena fr\u00e5n unionens h\u00f6ga representant av den 9, 23 och 29 april, 9 maj, 6, 9 och 11 juni, 9 och 31 juli, 1, 4, 18 och 30 augusti samt den 2 september 2011 om en utvidgning av de restriktiva \u00e5tg\u00e4rderna mot den syriska regimen,"}]}
SEBIS/legal_t5_small_trans_sv_it_small_finetuned
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation Swedish Italian model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "Swedish Italian" ]
TAGS #transformers #pytorch #jax #t5 #text2text-generation #translation Swedish Italian model #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
legal\_t5\_small\_trans\_sv\_it\_small\_finetuned model ======================================================= Model on translating legal text from Swedish to Italian. It was first released in this repository. This model is first pretrained all the translation data over some unsupervised task. Then the model is trained on three parallel corpus from jrc-acquis, europarl and dcep. Model description ----------------- legal\_t5\_small\_trans\_sv\_it\_small\_finetuned is initially pretrained on unsupervised task with the all of the data of the training set. The unsupervised task was "masked language modelling". legal\_t5\_small\_trans\_sv\_it\_small\_finetuned is based on the 't5-small' model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using 'dmodel = 512', 'dff = 2,048', 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters. Intended uses & limitations --------------------------- The model could be used for translation of legal texts from Swedish to Italian. ### How to use Here is how to use this model to translate legal text from Swedish to Italian in PyTorch: Training data ------------- The legal\_t5\_small\_trans\_sv\_it\_small\_finetuned (the supervised task which involved only the corresponding langauge pair and as well as unsupervised task where all of the data of all language pairs were available) model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 8 Million parallel texts. Training procedure ------------------ The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Preprocessing An unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model. ### Pretraining The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly. Evaluation results ------------------ When the model is used for translation test dataset, achieves the following results: Test results : ### BibTeX entry and citation info > > Created by Ahmed Elnaggar/@Elnaggar\_AI | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to translate legal text from Swedish to Italian in PyTorch:\n\n\nTraining data\n-------------\n\n\nThe legal\\_t5\\_small\\_trans\\_sv\\_it\\_small\\_finetuned (the supervised task which involved only the corresponding langauge pair and as well as unsupervised task where all of the data of all language pairs were available) model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 8 Million parallel texts.\n\n\nTraining procedure\n------------------\n\n\nThe model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Preprocessing\n\n\nAn unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.", "### Pretraining\n\n\nThe pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.\n\n\nEvaluation results\n------------------\n\n\nWhen the model is used for translation test dataset, achieves the following results:\n\n\nTest results :", "### BibTeX entry and citation info\n\n\n\n> \n> Created by Ahmed Elnaggar/@Elnaggar\\_AI | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #text2text-generation #translation Swedish Italian model #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to translate legal text from Swedish to Italian in PyTorch:\n\n\nTraining data\n-------------\n\n\nThe legal\\_t5\\_small\\_trans\\_sv\\_it\\_small\\_finetuned (the supervised task which involved only the corresponding langauge pair and as well as unsupervised task where all of the data of all language pairs were available) model was trained on JRC-ACQUIS, EUROPARL, and DCEP dataset consisting of 8 Million parallel texts.\n\n\nTraining procedure\n------------------\n\n\nThe model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Preprocessing\n\n\nAn unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.", "### Pretraining\n\n\nThe pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.\n\n\nEvaluation results\n------------------\n\n\nWhen the model is used for translation test dataset, achieves the following results:\n\n\nTest results :", "### BibTeX entry and citation info\n\n\n\n> \n> Created by Ahmed Elnaggar/@Elnaggar\\_AI | LinkedIn\n> \n> \n>" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-mnli This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.6560 - Accuracy: 0.8219 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:------:|:---------------:|:--------:| | 0.5161 | 1.0 | 24544 | 0.5025 | 0.8037 | | 0.4176 | 2.0 | 49088 | 0.5274 | 0.8131 | | 0.3154 | 3.0 | 73632 | 0.5348 | 0.8194 | | 0.2294 | 4.0 | 98176 | 0.6560 | 0.8219 | | 0.1827 | 5.0 | 122720 | 0.8190 | 0.8203 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
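## Example usage (sketch)

A minimal inference sketch, assuming the hub id `SEISHIN/distilbert-base-uncased-finetuned-mnli` from this card and the label mapping stored in the checkpoint's config; the premise and hypothesis below are placeholders:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "SEISHIN/distilbert-base-uncased-finetuned-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# MNLI is a sentence-pair task: encode premise and hypothesis together.
premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring logit back to its label name via the config.
predicted = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted])
```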
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-mnli", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "mnli"}, "metrics": [{"type": "accuracy", "value": 0.82190524707081, "name": "Accuracy"}]}]}]}
SEISHIN/distilbert-base-uncased-finetuned-mnli
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-uncased-finetuned-mnli ====================================== This model is a fine-tuned version of distilbert-base-uncased on the glue dataset. It achieves the following results on the evaluation set: * Loss: 0.6560 * Accuracy: 0.8219 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.10.0+cu111 * Datasets 1.17.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0605 - Precision: 0.9289 - Recall: 0.9387 - F1: 0.9338 - Accuracy: 0.9843 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2388 | 1.0 | 878 | 0.0671 | 0.9162 | 0.9211 | 0.9187 | 0.9813 | | 0.0504 | 2.0 | 1756 | 0.0602 | 0.9225 | 0.9366 | 0.9295 | 0.9834 | | 0.0299 | 3.0 | 2634 | 0.0605 | 0.9289 | 0.9387 | 0.9338 | 0.9843 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
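## Example usage (sketch)

A minimal inference sketch, assuming the hub id `SEISHIN/distilbert-base-uncased-finetuned-ner` from this card; the example sentence is a placeholder, and the pipeline groups word pieces back into whole entities:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="SEISHIN/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word tokens into entity spans
)

# Each result carries the entity group, confidence score and character offsets.
for entity in ner("Hugging Face is based in New York City."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```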
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metrics": [{"type": "precision", "value": 0.9289272666888077, "name": "Precision"}, {"type": "recall", "value": 0.9386956035350711, "name": "Recall"}, {"type": "f1", "value": 0.933785889160917, "name": "F1"}, {"type": "accuracy", "value": 0.9842565968195466, "name": "Accuracy"}]}]}]}
SEISHIN/distilbert-base-uncased-finetuned-ner
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-uncased-finetuned-ner ===================================== This model is a fine-tuned version of distilbert-base-uncased on the conll2003 dataset. It achieves the following results on the evaluation set: * Loss: 0.0605 * Precision: 0.9289 * Recall: 0.9387 * F1: 0.9338 * Accuracy: 0.9843 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.10.0+cu111 * Datasets 1.17.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.1605 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.2172 | 1.0 | 5533 | 1.1532 | | 0.9446 | 2.0 | 11066 | 1.1184 | | 0.7671 | 3.0 | 16599 | 1.1605 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
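## Example usage (sketch)

A minimal inference sketch, assuming the hub id `SEISHIN/distilbert-base-uncased-finetuned-squad` from this card; the question and context strings are placeholders:

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="SEISHIN/distilbert-base-uncased-finetuned-squad",
)

result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
# The pipeline returns the extracted answer span plus a confidence score.
print(result["answer"], result["score"])
```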
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "distilbert-base-uncased-finetuned-squad", "results": []}]}
SEISHIN/distilbert-base-uncased-finetuned-squad
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
distilbert-base-uncased-finetuned-squad ======================================= This model is a fine-tuned version of distilbert-base-uncased on the squad dataset. It achieves the following results on the evaluation set: * Loss: 1.1605 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.10.0+cu111 * Datasets 1.17.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
text-generation
transformers
GPT2-first-model
{}
SIC98/GPT2-first-model
null
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
GPT2-first-model
[]
[ "TAGS\n#transformers #pytorch #jax #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text-generation
transformers
Github - https://github.com/SIC98/GPT2-python-code-generator
{}
SIC98/GPT2-python-code-generator
null
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
Github - URL
[]
[ "TAGS\n#transformers #pytorch #jax #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n" ]
fill-mask
transformers
# SikuBERT
## Model description
![SikuBERT](https://raw.githubusercontent.com/SIKU-BERT/SikuBERT-for-digital-humanities-and-classical-Chinese-information-processing/main/appendix/sikubert.png)
Digital humanities research needs the support of large-scale corpora and high-performance natural language processing tools for ancient Chinese. Pre-trained language models have greatly improved the accuracy of text mining in English and modern Chinese texts. At present, there is an urgent need for a pre-trained model built specifically for the automatic processing of ancient texts. Using the verified, high-quality “Siku Quanshu” full-text corpus as the training set and the BERT deep language model architecture, we constructed the SikuBERT and SikuRoBERTa pre-trained language models for intelligent processing tasks of ancient Chinese.
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("SIKU-BERT/sikubert")
model = AutoModel.from_pretrained("SIKU-BERT/sikubert")
```
## About Us
We are from Nanjing Agricultural University.
> Created by SIKU-BERT [![Github icon](https://cdn0.iconfinder.com/data/icons/octicons/1024/mark-github-32.png)](https://github.com/SIKU-BERT/SikuBERT-for-digital-humanities-and-classical-Chinese-information-processing)
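## Masked token prediction (sketch)

The snippet above only loads the bare encoder; since the checkpoint is tagged for fill-mask, a masked-token query could look roughly like the following, where the classical Chinese sentence is only an illustration:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="SIKU-BERT/sikubert")

# BERT-style checkpoints mark the missing character with [MASK].
for prediction in fill_mask("學而時習之,不亦[MASK]乎。"):
    print(prediction["token_str"], round(prediction["score"], 3))
```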
{"language": ["zh"], "license": "apache-2.0", "tags": ["chinese", "classical chinese", "literary chinese", "ancient chinese", "bert", "roberta", "pytorch"], "thumbnail": "https://raw.githubusercontent.com/SIKU-BERT/SikuBERT/main/appendix/sikubert.png", "inference": false}
SIKU-BERT/sikubert
null
[ "transformers", "pytorch", "bert", "fill-mask", "chinese", "classical chinese", "literary chinese", "ancient chinese", "roberta", "zh", "license:apache-2.0", "autotrain_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "zh" ]
TAGS #transformers #pytorch #bert #fill-mask #chinese #classical chinese #literary chinese #ancient chinese #roberta #zh #license-apache-2.0 #autotrain_compatible #region-us
# SikuBERT ## Model description !SikuBERT Digital humanities research needs the support of large-scale corpus and high-performance ancient Chinese natural language processing tools. The pre-training language model has greatly improved the accuracy of text mining in English and modern Chinese texts. At present, there is an urgent need for a pre-training model specifically for the automatic processing of ancient texts. We used the verified high-quality “Siku Quanshu” full-text corpus as the training set, based on the BERT deep language model architecture, we constructed the SikuBERT and SikuRoBERTa pre-training language models for intelligent processing tasks of ancient Chinese. ## How to use ## About Us We are from Nanjing Agricultural University. > Created with by SIKU-BERT ![Github icon](URL
[ "# SikuBERT", "## Model description\n!SikuBERT\nDigital humanities research needs the support of large-scale corpus and high-performance ancient Chinese natural language processing tools. The pre-training language model has greatly improved the accuracy of text mining in English and modern Chinese texts. At present, there is an urgent need for a pre-training model specifically for the automatic processing of ancient texts. We used the verified high-quality “Siku Quanshu” full-text corpus as the training set, based on the BERT deep language model architecture, we constructed the SikuBERT and SikuRoBERTa pre-training language models for intelligent processing tasks of ancient Chinese.", "## How to use", "## About Us\nWe are from Nanjing Agricultural University.\n> Created with by SIKU-BERT ![Github icon](URL" ]
[ "TAGS\n#transformers #pytorch #bert #fill-mask #chinese #classical chinese #literary chinese #ancient chinese #roberta #zh #license-apache-2.0 #autotrain_compatible #region-us \n", "# SikuBERT", "## Model description\n!SikuBERT\nDigital humanities research needs the support of large-scale corpus and high-performance ancient Chinese natural language processing tools. The pre-training language model has greatly improved the accuracy of text mining in English and modern Chinese texts. At present, there is an urgent need for a pre-training model specifically for the automatic processing of ancient texts. We used the verified high-quality “Siku Quanshu” full-text corpus as the training set, based on the BERT deep language model architecture, we constructed the SikuBERT and SikuRoBERTa pre-training language models for intelligent processing tasks of ancient Chinese.", "## How to use", "## About Us\nWe are from Nanjing Agricultural University.\n> Created with by SIKU-BERT ![Github icon](URL" ]
fill-mask
transformers
# SikuBERT
## Model description
![SikuBERT](https://raw.githubusercontent.com/SIKU-BERT/SikuBERT-for-digital-humanities-and-classical-Chinese-information-processing/main/appendix/sikubert.png)
Digital humanities research needs the support of large-scale corpora and high-performance natural language processing tools for ancient Chinese. Pre-trained language models have greatly improved the accuracy of text mining in English and modern Chinese texts. At present, there is an urgent need for a pre-trained model built specifically for the automatic processing of ancient texts. Using the verified, high-quality “Siku Quanshu” full-text corpus as the training set and the BERT deep language model architecture, we constructed the SikuBERT and SikuRoBERTa pre-trained language models for intelligent processing tasks of ancient Chinese.
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("SIKU-BERT/sikuroberta")
model = AutoModel.from_pretrained("SIKU-BERT/sikuroberta")
```
## About Us
We are from Nanjing Agricultural University.
> Created by SIKU-BERT [![Github icon](https://cdn0.iconfinder.com/data/icons/octicons/1024/mark-github-32.png)](https://github.com/SIKU-BERT/SikuBERT-for-digital-humanities-and-classical-Chinese-information-processing)
{"language": ["zh"], "license": "apache-2.0", "tags": ["chinese", "classical chinese", "literary chinese", "ancient chinese", "bert", "roberta", "pytorch"], "thumbnail": "https://raw.githubusercontent.com/SIKU-BERT/SikuBERT/main/appendix/sikubert.png", "inference": false}
SIKU-BERT/sikuroberta
null
[ "transformers", "pytorch", "bert", "fill-mask", "chinese", "classical chinese", "literary chinese", "ancient chinese", "roberta", "zh", "license:apache-2.0", "autotrain_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "zh" ]
TAGS #transformers #pytorch #bert #fill-mask #chinese #classical chinese #literary chinese #ancient chinese #roberta #zh #license-apache-2.0 #autotrain_compatible #region-us
# SikuBERT ## Model description !SikuBERT Digital humanities research needs the support of large-scale corpus and high-performance ancient Chinese natural language processing tools. The pre-training language model has greatly improved the accuracy of text mining in English and modern Chinese texts. At present, there is an urgent need for a pre-training model specifically for the automatic processing of ancient texts. We used the verified high-quality “Siku Quanshu” full-text corpus as the training set, based on the BERT deep language model architecture, we constructed the SikuBERT and SikuRoBERTa pre-training language models for intelligent processing tasks of ancient Chinese. ## How to use ## About Us We are from Nanjing Agricultural University. > Created with by SIKU-BERT ![Github icon](URL
[ "# SikuBERT", "## Model description\n!SikuBERT\nDigital humanities research needs the support of large-scale corpus and high-performance ancient Chinese natural language processing tools. The pre-training language model has greatly improved the accuracy of text mining in English and modern Chinese texts. At present, there is an urgent need for a pre-training model specifically for the automatic processing of ancient texts. We used the verified high-quality “Siku Quanshu” full-text corpus as the training set, based on the BERT deep language model architecture, we constructed the SikuBERT and SikuRoBERTa pre-training language models for intelligent processing tasks of ancient Chinese.", "## How to use", "## About Us\nWe are from Nanjing Agricultural University.\n> Created with by SIKU-BERT ![Github icon](URL" ]
[ "TAGS\n#transformers #pytorch #bert #fill-mask #chinese #classical chinese #literary chinese #ancient chinese #roberta #zh #license-apache-2.0 #autotrain_compatible #region-us \n", "# SikuBERT", "## Model description\n!SikuBERT\nDigital humanities research needs the support of large-scale corpus and high-performance ancient Chinese natural language processing tools. The pre-training language model has greatly improved the accuracy of text mining in English and modern Chinese texts. At present, there is an urgent need for a pre-training model specifically for the automatic processing of ancient texts. We used the verified high-quality “Siku Quanshu” full-text corpus as the training set, based on the BERT deep language model architecture, we constructed the SikuBERT and SikuRoBERTa pre-training language models for intelligent processing tasks of ancient Chinese.", "## How to use", "## About Us\nWe are from Nanjing Agricultural University.\n> Created with by SIKU-BERT ![Github icon](URL" ]
text-generation
transformers
# RickBot
{"tags": ["conversational"]}
SJSui/RickBot
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# RickBot
[ "# RickBot" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# RickBot" ]
text-generation
transformers
## LiveSafe chatbot response generation model based on DialoGPT
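A minimal generation sketch, assuming the hub id `SPGT/LiveSafe-DialoGPT` from this record and the usual DialoGPT convention of terminating each dialogue turn with the EOS token; the prompt below is only an illustration:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SPGT/LiveSafe-DialoGPT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Encode one user turn, terminated by the EOS token as DialoGPT-style models expect.
prompt = "Someone is following me on campus, what should I do?" + tokenizer.eos_token
input_ids = tokenizer.encode(prompt, return_tensors="pt")

reply_ids = model.generate(
    input_ids,
    max_length=200,
    pad_token_id=tokenizer.eos_token_id,
)

# Strip the prompt tokens and decode only the generated reply.
reply = tokenizer.decode(reply_ids[0, input_ids.shape[-1]:], skip_special_tokens=True)
print(reply)
```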
{"license": "mit", "tags": ["conversational"]}
SPGT/LiveSafe-DialoGPT
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
## LiveSafe chatbot response generation model based on DialogGPT
[ "## LiveSafe chatbot response generation model based on DialogGPT" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "## LiveSafe chatbot response generation model based on DialogGPT" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # test This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7810, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - TensorFlow 2.7.0 - Datasets 1.17.0 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "test", "results": []}]}
SS8/test
null
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #tf #distilbert #text-classification #generated_from_keras_callback #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# test This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7810, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - TensorFlow 2.7.0 - Datasets 1.17.0 - Tokenizers 0.10.3
[ "# test\n\nThis model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.\nIt achieves the following results on the evaluation set:", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7810, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n- training_precision: float32", "### Training results", "### Framework versions\n\n- Transformers 4.16.0.dev0\n- TensorFlow 2.7.0\n- Datasets 1.17.0\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #tf #distilbert #text-classification #generated_from_keras_callback #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# test\n\nThis model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.\nIt achieves the following results on the evaluation set:", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7810, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n- training_precision: float32", "### Training results", "### Framework versions\n\n- Transformers 4.16.0.dev0\n- TensorFlow 2.7.0\n- Datasets 1.17.0\n- Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # test2 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2510 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7810, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Epoch | |:----------:|:-----:| | 0.2510 | 0 | ### Framework versions - Transformers 4.16.0.dev0 - TensorFlow 2.7.0 - Datasets 1.17.0 - Tokenizers 0.10.3
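The optimizer entry above is the raw serialized config; reconstructed as Keras objects, assuming the listed values map one-to-one onto `tf.keras` arguments, it corresponds roughly to:

```python
import tensorflow as tf

# PolynomialDecay learning-rate schedule taken from the config listed above.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=7810,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)

# Adam optimizer with the remaining listed hyperparameters.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    amsgrad=False,
)
```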
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "test2", "results": []}]}
SS8/test2
null
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #tf #distilbert #text-classification #generated_from_keras_callback #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
test2 ===== This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset. It achieves the following results on the evaluation set: * Train Loss: 0.2510 * Epoch: 0 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * optimizer: {'name': 'Adam', 'learning\_rate': {'class\_name': 'PolynomialDecay', 'config': {'initial\_learning\_rate': 2e-05, 'decay\_steps': 7810, 'end\_learning\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} * training\_precision: float32 ### Training results ### Framework versions * Transformers 4.16.0.dev0 * TensorFlow 2.7.0 * Datasets 1.17.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'learning\\_rate': {'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 7810, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* TensorFlow 2.7.0\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #tf #distilbert #text-classification #generated_from_keras_callback #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'learning\\_rate': {'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 7810, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* TensorFlow 2.7.0\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
null
null
just a test
{}
SSY/mytest
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #region-us
just a test
[]
[ "TAGS\n#region-us \n" ]
null
transformers
# huBERT base model (cased)

## Model description

Cased BERT model for Hungarian, trained on the (filtered, deduplicated) Hungarian subset of the Common Crawl and a snapshot of the Hungarian Wikipedia.

## Intended uses & limitations

The model can be used as any other (cased) BERT model. It has been tested on the chunking and named entity recognition tasks and set a new state-of-the-art on the former.

## Training

Details of the training data and procedure can be found in the PhD thesis linked below. (With the caveat that it only contains preliminary results based on the Wikipedia subcorpus. Evaluation of the full model will appear in a future paper.)

## Eval results

When fine-tuned (via `BertForTokenClassification`) on chunking and NER, the model outperforms multilingual BERT and achieves state-of-the-art results on both tasks. The exact scores are:

| NER | Minimal NP | Maximal NP |
|-----|------------|------------|
| **97.62%** | **97.14%** | **96.97%** |

### BibTeX entry and citation info

If you use the model, please cite the following papers:

[Nemeskey, Dávid Márk (2020). "Natural Language Processing Methods for Language Modeling." PhD Thesis. Eötvös Loránd University.](https://hlt.bme.hu/en/publ/nemeskey_2020)

Bibtex:

```bibtex
@PhDThesis{ Nemeskey:2020,
  author = {Nemeskey, Dávid Márk},
  title = {Natural Language Processing Methods for Language Modeling},
  year = {2020},
  school = {E\"otv\"os Lor\'and University}
}
```

[Nemeskey, Dávid Márk (2021). "Introducing huBERT." In: XVII. Magyar Számítógépes Nyelvészeti Konferencia (MSZNY 2021). Szeged, pp. 3-14](https://hlt.bme.hu/en/publ/hubert_2021)

Bibtex:

```bibtex
@InProceedings{ Nemeskey:2021a,
  author = {Nemeskey, Dávid Márk},
  title = {Introducing \texttt{huBERT}},
  booktitle = {{XVII}.\ Magyar Sz{\'a}m{\'i}t{\'o}g{\'e}pes Nyelv{\'e}szeti Konferencia ({MSZNY}2021)},
  year = 2021,
  pages = {TBA},
  address = {Szeged},
}
```
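## Fine-tuning sketch

The card mentions fine-tuning via `BertForTokenClassification` but gives no code; a minimal starting point might look like the following, where the label count and the example sentence are placeholders that depend on the chunking or NER tag set in use:

```python
from transformers import AutoTokenizer, BertForTokenClassification

model_id = "SZTAKI-HLT/hubert-base-cc"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# num_labels must match the task's tag set; 9 is only an illustrative value.
model = BertForTokenClassification.from_pretrained(model_id, num_labels=9)

encoding = tokenizer("Budapest Magyarország fővárosa.", return_tensors="pt")
outputs = model(**encoding)
print(outputs.logits.shape)  # (batch, sequence length, num_labels)
```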
{"language": "hu", "license": "apache-2.0", "datasets": ["common_crawl", "wikipedia"]}
SZTAKI-HLT/hubert-base-cc
null
[ "transformers", "pytorch", "tf", "jax", "bert", "hu", "dataset:common_crawl", "dataset:wikipedia", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "hu" ]
TAGS #transformers #pytorch #tf #jax #bert #hu #dataset-common_crawl #dataset-wikipedia #license-apache-2.0 #endpoints_compatible #has_space #region-us
huBERT base model (cased) ========================= Model description ----------------- Cased BERT model for Hungarian, trained on the (filtered, deduplicated) Hungarian subset of the Common Crawl and a snapshot of the Hungarian Wikipedia. Intended uses & limitations --------------------------- The model can be used as any other (cased) BERT model. It has been tested on the chunking and named entity recognition tasks and set a new state-of-the-art on the former. Training -------- Details of the training data and procedure can be found in the PhD thesis linked below. (With the caveat that it only contains preliminary results based on the Wikipedia subcorpus. Evaluation of the full model will appear in a future paper.) Eval results ------------ When fine-tuned (via 'BertForTokenClassification') on chunking and NER, the model outperforms multilingual BERT, achieves state-of-the-art results on both tasks. The exact scores are NER: 97.62%, Minimal NP: 97.14%, Maximal NP: 96.97% ### BibTeX entry and citation info If you use the model, please cite the following papers: Nemeskey, Dávid Márk (2020). "Natural Language Processing Methods for Language Modeling." PhD Thesis. Eötvös Loránd University. Bibtex: Nemeskey, Dávid Márk (2021). "Introducing huBERT." In: XVII. Magyar Számítógépes Nyelvészeti Konferencia (MSZNY 2021). Szeged, pp. 3-14 Bibtex:
[ "### BibTeX entry and citation info\n\n\nIf you use the model, please cite the following papers:\n\n\nNemeskey, Dávid Márk (2020). \"Natural Language Processing Methods for Language Modeling.\" PhD Thesis. Eötvös Loránd University.\n\n\nBibtex:\n\n\nNemeskey, Dávid Márk (2021). \"Introducing huBERT.\" In: XVII. Magyar Számítógépes Nyelvészeti Konferencia (MSZNY 2021). Szeged, pp. 3-14\n\n\nBibtex:" ]
[ "TAGS\n#transformers #pytorch #tf #jax #bert #hu #dataset-common_crawl #dataset-wikipedia #license-apache-2.0 #endpoints_compatible #has_space #region-us \n", "### BibTeX entry and citation info\n\n\nIf you use the model, please cite the following papers:\n\n\nNemeskey, Dávid Márk (2020). \"Natural Language Processing Methods for Language Modeling.\" PhD Thesis. Eötvös Loránd University.\n\n\nBibtex:\n\n\nNemeskey, Dávid Márk (2021). \"Introducing huBERT.\" In: XVII. Magyar Számítógépes Nyelvészeti Konferencia (MSZNY 2021). Szeged, pp. 3-14\n\n\nBibtex:" ]
text-generation
transformers
# Jett DialoGPT Model
{"tags": ["conversational"]}
SaffronIce/DialoGPT-medium-Jett
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Jett DialoGPT Model
[ "# Jett DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Jett DialoGPT Model" ]
question-answering
transformers
### QA Model trained on MLQA dataset for the German language.

MODEL used for fine-tuning is GBERT Large by deepset.ai

## MLQA DEV (german)
EM: 63.82
F1: 77.20

## XQUAD TEST (german)
EM: 65.96
F1: 80.85

## Model inferencing:

```python
!pip install -q transformers
from transformers import pipeline
qa_pipeline = pipeline(
    "question-answering",
    model="Sahajtomar/GBERTQnA",
    tokenizer="Sahajtomar/GBERTQnA"
)

qa_pipeline({
    'context': "Vor einigen Jahren haben Wissenschaftler ein wichtiges Mutagen identifiziert, das in unseren eigenen Zellen liegt: APOBEC, ein Protein, das normalerweise als Schutzmittel gegen Virusinfektionen fungiert. Heute hat ein Team von Schweizer und russischen Wissenschaftlern unter der Leitung von Sergey Nikolaev, Genetiker an der Universität Genf (UNIGE) in der Schweiz, entschlüsselt, wie APOBEC eine Schwäche unseres DNA-Replikationsprozesses ausnutzt, um Mutationen in unserem Genom zu induzieren.",
    'question': "Welches Mutagen schützt vor Virusinfektionen?"
})

# output
{'answer': 'APOBEC', 'end': 121, 'score': 0.9815779328346252, 'start': 115}

## Even complex queries can be answered pretty well

qa_pipeline({
    'context': "Im Juli 1944 befand sich die Rote Armee tief auf polnischem Gebiet und verfolgte die Deutschen in Richtung Warschau. In dem Wissen, dass Stalin der Idee eines unabhängigen Polens feindlich gegenüberstand, gab die polnische Exilregierung in London der unterirdischen Heimatarmee (AK) den Befehl, vor dem Eintreffen der Roten Armee zu versuchen, die Kontrolle über Warschau von den Deutschen zu übernehmen. So begann am 1. August 1944, als sich die Rote Armee der Stadt näherte, der Warschauer Aufstand. Der bewaffnete Kampf, der 48 Stunden dauern sollte, war teilweise erfolgreich, dauerte jedoch 63 Tage. Schließlich mussten die Kämpfer der Heimatarmee und die ihnen unterstützenden Zivilisten kapitulieren. Sie wurden in Kriegsgefangenenlager in Deutschland transportiert, während die gesamte Zivilbevölkerung ausgewiesen wurde. Die Zahl der polnischen Zivilisten wird auf 150.000 bis 200.000 geschätzt.",
    'question': "Wer wurde nach Deutschland transportiert?"
})

# output
{'answer': 'die Kämpfer der Heimatarmee und die ihnen unterstützenden Zivilisten',
 'end': 693,
 'score': 0.23357819020748138,
 'start': 625}
```

Try it on a Colab:
<a href="https://github.com/Sahajtomar/Question-Answering/blob/main/Sahajtomar_GBERTQnA.ipynb" target="_parent"><img src="https://camo.githubusercontent.com/52feade06f2fecbf006889a904d221e6a730c194/68747470733a2f2f636f6c61622e72657365617263682e676f6f676c652e636f6d2f6173736574732f636f6c61622d62616467652e737667" alt="Open In Colab" data-canonical-src="https://colab.research.google.com/assets/colab-badge.svg"></a>
{"language": "de", "tags": ["pytorch", "tf", "bert"], "datasets": ["mlqa"], "metrics": ["f1", "em"]}
Sahajtomar/GBERTQnA
null
[ "transformers", "pytorch", "tf", "jax", "bert", "question-answering", "de", "dataset:mlqa", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "de" ]
TAGS #transformers #pytorch #tf #jax #bert #question-answering #de #dataset-mlqa #endpoints_compatible #region-us
### QA Model trained on MLQA dataset for german langauge. MODEL used for fine tuning is GBERT Large by URL ## MLQA DEV (german) EM: 63.82 F1: 77.20 ## XQUAD TEST (german) EM: 65.96 F1: 80.85 ## Model inferencing: Try it on a Colab: <a href="URL target="_parent"><img src="URL alt="Open In Colab" data-canonical-src="URL
[ "### QA Model trained on MLQA dataset for german langauge.\n\nMODEL used for fine tuning is GBERT Large by URL", "## MLQA DEV (german)\nEM: 63.82 \nF1: 77.20", "## XQUAD TEST (german)\nEM: 65.96 \nF1: 80.85", "## Model inferencing:\n\n\nTry it on a Colab:\n \n <a href=\"URL target=\"_parent\"><img src=\"URL alt=\"Open In Colab\" data-canonical-src=\"URL" ]
[ "TAGS\n#transformers #pytorch #tf #jax #bert #question-answering #de #dataset-mlqa #endpoints_compatible #region-us \n", "### QA Model trained on MLQA dataset for german langauge.\n\nMODEL used for fine tuning is GBERT Large by URL", "## MLQA DEV (german)\nEM: 63.82 \nF1: 77.20", "## XQUAD TEST (german)\nEM: 65.96 \nF1: 80.85", "## Model inferencing:\n\n\nTry it on a Colab:\n \n <a href=\"URL target=\"_parent\"><img src=\"URL alt=\"Open In Colab\" data-canonical-src=\"URL" ]
question-answering
transformers
### QA Model trained on MLQA dataset for german langauge. MODEL used for fine tuning is GELECTRA Large by deepset.ai ## MLQA DEV (german) EM: 64.27 \ F1: 77.39 ## XQUAD TEST (german) EM: 66.38 \ F1: 82.25 ## Hyperparameters per_gpu_train_batch_size 4 \ per_gpu_eval_batch_size 32 \ gradient_accumulation_steps 8 \ learning_rate 3e-5 \ num_train_epochs 1.0 \ max_seq_length 384 \ doc_stride 128 ## Model inferencing: ```python !pip install -q transformers from transformers import pipeline qa_pipeline = pipeline( "question-answering", model="Sahajtomar/GELECTRAQA", tokenizer="Sahajtomar/GELECTRAQA" ) qa_pipeline({ 'context': "Vor einigen Jahren haben Wissenschaftler ein wichtiges Mutagen identifiziert, das in unseren eigenen Zellen liegt: APOBEC, ein Protein, das normalerweise als Schutzmittel gegen Virusinfektionen fungiert. Heute hat ein Team von Schweizer und russischen Wissenschaftlern unter der Leitung von Sergey Nikolaev, Genetiker an der Universität Genf (UNIGE) in der Schweiz, entschlüsselt, wie APOBEC eine Schwäche unseres DNA-Replikationsprozesses ausnutzt, um Mutationen in unserem Genom zu induzieren.", 'question': "Welches Mutagen schützt vor Virusinfektionen?" }) # output {'answer': 'APOBEC', 'end': 121, 'score': 0.987, 'start': 115} ## Even complex queries can be answered pretty well qa_pipeline({ "context": "Es wird erwartet, dass sich schwarze Löcher mit Sternmasse bilden, wenn sehr massive Sterne am Ende ihres Lebenszyklus zusammenbrechen. Nachdem sich ein Schwarzes Loch gebildet hat, kann es weiter wachsen,indem es Masse aus seiner Umgebung absorbiert. Durch Absorption anderer Sterne und Verschmelzung mit anderen Schwarzen Löchern können sich supermassereiche Schwarze Löcher mit Millionen von Sonnenmassen (M☉) bilden. Es besteht Konsens darüber, dass in den Zentren der meisten Galaxien supermassereiche Schwarze Löcher existieren.", 'question': "Wie Sonnenmassen entstehen?" }) #output {'answer': 'Durch Absorption anderer Sterne und Verschmelzung mit anderen Schwarzen Löchern', 'end': 332, 'score': 0.23970196, 'start': 253} ```
{"language": "de", "tags": ["pytorch", "tf", "Gelectra"], "datasets": ["mlqa"], "metrics": ["f1", "em"]}
Sahajtomar/German-question-answer-Electra
null
[ "transformers", "pytorch", "tf", "electra", "question-answering", "Gelectra", "de", "dataset:mlqa", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "de" ]
TAGS #transformers #pytorch #tf #electra #question-answering #Gelectra #de #dataset-mlqa #endpoints_compatible #region-us
### QA Model trained on MLQA dataset for german langauge. MODEL used for fine tuning is GELECTRA Large by URL ## MLQA DEV (german) EM: 64.27 \ F1: 77.39 ## XQUAD TEST (german) EM: 66.38 \ F1: 82.25 ## Hyperparameters per_gpu_train_batch_size 4 \ per_gpu_eval_batch_size 32 \ gradient_accumulation_steps 8 \ learning_rate 3e-5 \ num_train_epochs 1.0 \ max_seq_length 384 \ doc_stride 128 ## Model inferencing:
[ "### QA Model trained on MLQA dataset for german langauge.\n\nMODEL used for fine tuning is GELECTRA Large by URL", "## MLQA DEV (german)\nEM: 64.27 \\\nF1: 77.39", "## XQUAD TEST (german)\nEM: 66.38 \\\nF1: 82.25", "## Hyperparameters\n\nper_gpu_train_batch_size 4 \\\nper_gpu_eval_batch_size 32 \\\ngradient_accumulation_steps 8 \\\nlearning_rate 3e-5 \\\nnum_train_epochs 1.0 \\\nmax_seq_length 384 \\\ndoc_stride 128", "## Model inferencing:" ]
[ "TAGS\n#transformers #pytorch #tf #electra #question-answering #Gelectra #de #dataset-mlqa #endpoints_compatible #region-us \n", "### QA Model trained on MLQA dataset for german langauge.\n\nMODEL used for fine tuning is GELECTRA Large by URL", "## MLQA DEV (german)\nEM: 64.27 \\\nF1: 77.39", "## XQUAD TEST (german)\nEM: 66.38 \\\nF1: 82.25", "## Hyperparameters\n\nper_gpu_train_batch_size 4 \\\nper_gpu_eval_batch_size 32 \\\ngradient_accumulation_steps 8 \\\nlearning_rate 3e-5 \\\nnum_train_epochs 1.0 \\\nmax_seq_length 384 \\\ndoc_stride 128", "## Model inferencing:" ]
sentence-similarity
sentence-transformers
# German STS ## STS dev (german) 87.9% ## STS test (german) 84.3% #### STS pipeline ```python !pip install -U sentence-transformers from sentence_transformers import SentenceTransformer, util model = SentenceTransformer('..model_path..') sentences1 = ['Die Katze sitzt draußen', "Ein Mann spielt Gitarre", 'Der neue Film ist großartig'] sentences2 = ['Der Hund spielt im Garten', "Eine Frau sieht fern", 'Der neue Film ist so toll'] embeddings1 = model.encode(sentences1, convert_to_tensor=True) embeddings2 = model.encode(sentences2, convert_to_tensor=True) cosine_scores = util.pytorch_cos_sim(embeddings1, embeddings2) for i in range(len(sentences1)): for j in range(len(sentences2)): print(cosine_scores[i][j]) """ Die Katze sitzt draußen Der Hund spielt im Garten Score: 0.1259 Die Katze sitzt draußen Eine Frau sieht fern Score: 0.0567 Die Katze sitzt draußen Der neue Film ist so toll Score: 0.0557 Ein Mann spielt Gitarre Der Hund spielt im Garten Score: 0.1031 Ein Mann spielt Gitarre Eine Frau sieht fern Score: 0.0098 Ein Mann spielt Gitarre Der neue Film ist so toll Score: 0.0828 Der neue Film ist großartig Der Hund spielt im Garten Score: 0.1008 Der neue Film ist großartig Eine Frau sieht fern Score: 0.0674 """ ```
{"language": "de", "tags": ["semantic", "sentence-transformers", "sentence-similarity"], "datasets": ["sts"]}
Sahajtomar/German-semantic
null
[ "sentence-transformers", "bert", "semantic", "sentence-similarity", "de", "dataset:sts", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "de" ]
TAGS #sentence-transformers #bert #semantic #sentence-similarity #de #dataset-sts #endpoints_compatible #has_space #region-us
# German STS ## STS dev (german) 87.9% ## STS test (german) 84.3% #### STS pipeline
[ "# German STS", "## STS dev (german)\n87.9%", "## STS test (german)\n84.3%", "#### STS pipeline" ]
[ "TAGS\n#sentence-transformers #bert #semantic #sentence-similarity #de #dataset-sts #endpoints_compatible #has_space #region-us \n", "# German STS", "## STS dev (german)\n87.9%", "## STS test (german)\n84.3%", "#### STS pipeline" ]
zero-shot-classification
transformers
# German Zeroshot ## Model Description This model has [GBERT Large](https://huggingface.co/deepset/gbert-large) as base model and fine-tuned it on xnli de dataset. The default hypothesis template is in English: `This text is {}`. While using this model , change it to "In deisem geht es um {}." or something different. While inferencing through huggingface api may give poor results as it uses by default english template. Since model is monolingual and not multilingual, hypothesis template needs to be changed accordingly. ## XNLI DEV (german) Accuracy: 85.5 ## XNLI TEST (german) Accuracy: 83.6 #### Zero-shot classification pipeline ```python from transformers import pipeline classifier = pipeline("zero-shot-classification", model="Sahajtomar/German_Zeroshot") sequence = "Letzte Woche gab es einen Selbstmord in einer nahe gelegenen kolonie" candidate_labels = ["Verbrechen","Tragödie","Stehlen"] hypothesis_template = "In deisem geht es um {}." ## Since monolingual model,its sensitive to hypothesis template. This can be experimented classifier(sequence, candidate_labels, hypothesis_template=hypothesis_template) """{'labels': ['Tragödie', 'Verbrechen', 'Stehlen'], 'scores': [0.8328856854438782, 0.10494536352157593, 0.06316883927583696], 'sequence': 'Letzte Woche gab es einen Selbstmord in einer nahe gelegenen Kolonie'}""" ```
{"language": "multilingual", "tags": ["text-classification", "pytorch", "nli", "xnli", "de"], "datasets": ["xnli"], "pipeline_tag": "zero-shot-classification", "widget": [{"text": "Letzte Woche gab es einen Selbstmord in einer nahe gelegenen kolonie", "candidate_labels": "Verbrechen,Trag\u00f6die,Stehlen", "hypothesis_template": "In deisem geht es um {}."}]}
Sahajtomar/German_Zeroshot
null
[ "transformers", "pytorch", "jax", "bert", "text-classification", "nli", "xnli", "de", "zero-shot-classification", "multilingual", "dataset:xnli", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "multilingual" ]
TAGS #transformers #pytorch #jax #bert #text-classification #nli #xnli #de #zero-shot-classification #multilingual #dataset-xnli #autotrain_compatible #endpoints_compatible #has_space #region-us
# German Zeroshot ## Model Description This model has GBERT Large as base model and fine-tuned it on xnli de dataset. The default hypothesis template is in English: 'This text is {}'. While using this model , change it to "In deisem geht es um {}." or something different. While inferencing through huggingface api may give poor results as it uses by default english template. Since model is monolingual and not multilingual, hypothesis template needs to be changed accordingly. ## XNLI DEV (german) Accuracy: 85.5 ## XNLI TEST (german) Accuracy: 83.6 #### Zero-shot classification pipeline
[ "# German Zeroshot", "## Model Description\n\nThis model has GBERT Large as base model and fine-tuned it on xnli de dataset.\nThe default hypothesis template is in English: 'This text is {}'. While using this model , change it to \"In deisem geht es um {}.\" or something different. While inferencing through huggingface api may give poor results as it uses by default english template. Since model is monolingual and not multilingual, hypothesis template needs to be changed accordingly.", "## XNLI DEV (german)\nAccuracy: 85.5", "## XNLI TEST (german)\nAccuracy: 83.6", "#### Zero-shot classification pipeline" ]
[ "TAGS\n#transformers #pytorch #jax #bert #text-classification #nli #xnli #de #zero-shot-classification #multilingual #dataset-xnli #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# German Zeroshot", "## Model Description\n\nThis model has GBERT Large as base model and fine-tuned it on xnli de dataset.\nThe default hypothesis template is in English: 'This text is {}'. While using this model , change it to \"In deisem geht es um {}.\" or something different. While inferencing through huggingface api may give poor results as it uses by default english template. Since model is monolingual and not multilingual, hypothesis template needs to be changed accordingly.", "## XNLI DEV (german)\nAccuracy: 85.5", "## XNLI TEST (german)\nAccuracy: 83.6", "#### Zero-shot classification pipeline" ]
token-classification
transformers
### NER model trained on BERT The model used for fine-tuning is GBERT Large by deepset.ai ## Test Accuracy: 98 \ F1: 84.1 \ Precision: 82.7 \ Recall: 85.5 ## Model inferencing: ```python !pip install -q transformers from transformers import pipeline ner = pipeline( "ner", model="Sahajtomar/NER_legal_de", tokenizer="Sahajtomar/NER_legal_de") ner("Für eine Zuständigkeit des Verwaltungsgerichts Berlin nach § 52 Nr. 1 bis 4 VwGO hat der \ Antragsteller keine Anhaltspunkte vorgetragen .") ```
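A possible follow-up to the snippet above (an illustrative sketch, not part of the original card): the `ner` pipeline returns a list of per-token predictions, which can be inspected like this.

```python
# Illustrative sketch: inspect the predictions returned by the `ner` pipeline defined above.
predictions = ner(
    "Für eine Zuständigkeit des Verwaltungsgerichts Berlin nach § 52 Nr. 1 bis 4 VwGO "
    "hat der Antragsteller keine Anhaltspunkte vorgetragen."
)
for p in predictions:
    # Each entry carries the token, its predicted entity tag, and a confidence score.
    print(p["word"], p["entity"], round(float(p["score"]), 3))
```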
{"language": "de", "tags": ["pytorch", "tf", "bert", "NER"], "datasets": ["legal entity recognition"]}
Sahajtomar/NER_legal_de
null
[ "transformers", "pytorch", "tf", "jax", "bert", "token-classification", "NER", "de", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "de" ]
TAGS #transformers #pytorch #tf #jax #bert #token-classification #NER #de #autotrain_compatible #endpoints_compatible #region-us
### NER model trained on BERT MODEL used for fine tuning is GBERT Large by URL ## Test Accuracy: 98 \ F1: 84.1 \ Precision: 82.7 \ Recall: 85.5 ## Model inferencing:
[ "### NER model trained on BERT \n\nMODEL used for fine tuning is GBERT Large by URL", "## Test\nAccuracy: 98 \\\nF1: 84.1 \\\nPrecision: 82.7 \\\nRecall: 85.5", "## Model inferencing:" ]
[ "TAGS\n#transformers #pytorch #tf #jax #bert #token-classification #NER #de #autotrain_compatible #endpoints_compatible #region-us \n", "### NER model trained on BERT \n\nMODEL used for fine tuning is GBERT Large by URL", "## Test\nAccuracy: 98 \\\nF1: 84.1 \\\nPrecision: 82.7 \\\nRecall: 85.5", "## Model inferencing:" ]
sentence-similarity
sentence-transformers
# French STS ## STS dev (french) 87.4% ## STS test (french) 85.8% #### STS pipeline ```python !pip install -U sentence-transformers from sentence_transformers import SentenceTransformer, util model = SentenceTransformer('..model_path..') sentences1 = ["J'aime mon téléphone", "Mon téléphone n'est pas bon.", "Votre téléphone portable est superbe."] sentences2 = ["Est-ce qu'il neige demain?", "Récemment, de nombreux ouragans ont frappé les États-Unis", "Le réchauffement climatique est réel",] embeddings1 = model.encode(sentences1, convert_to_tensor=True) embeddings2 = model.encode(sentences2, convert_to_tensor=True) cosine_scores = util.pytorch_cos_sim(embeddings1, embeddings2) for i in range(len(sentences1)): for j in range(len(sentences2)): print(cosine_scores[i][j]) """ """ ```
{"language": "fr", "tags": ["semantic", "sentence-transformers", "sentence-similarity", "fr"], "datasets": ["sts"]}
Sahajtomar/french_semantic
null
[ "sentence-transformers", "semantic", "sentence-similarity", "fr", "dataset:sts", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "fr" ]
TAGS #sentence-transformers #semantic #sentence-similarity #fr #dataset-sts #endpoints_compatible #has_space #region-us
# French STS ## STS dev (french) 87.4% ## STS test (french) 85.8% #### STS pipeline
[ "# French STS", "## STS dev (french)\n87.4%", "## STS test (french)\n85.8%", "#### STS pipeline" ]
[ "TAGS\n#sentence-transformers #semantic #sentence-similarity #fr #dataset-sts #endpoints_compatible #has_space #region-us \n", "# French STS", "## STS dev (french)\n87.4%", "## STS test (french)\n85.8%", "#### STS pipeline" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-hindi-kaggle This model was trained from scratch on the common_voice dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
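Since the usage sections above are left as "More information needed", here is a hedged inference sketch (an assumption on my part, not taken from the card): it relies on the standard automatic-speech-recognition pipeline and a placeholder 16 kHz audio file.

```python
from transformers import pipeline

# Illustrative sketch only: transcribe a local audio file with the ASR pipeline.
# "sample_hi.wav" is a placeholder path, not a file shipped with this model.
asr = pipeline(
    "automatic-speech-recognition",
    model="Saitomar/wav2vec2-large-xls-r-300m-hindi-kaggle",
)
result = asr("sample_hi.wav")
print(result["text"])
```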
{"language": ["hi"], "tags": ["generated_from_trainer", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-hindi-kaggle", "results": []}]}
Saitomar/wav2vec2-large-xls-r-300m-hindi-kaggle
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "robust-speech-event", "hf-asr-leaderboard", "hi", "dataset:common_voice", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "hi" ]
TAGS #transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #robust-speech-event #hf-asr-leaderboard #hi #dataset-common_voice #endpoints_compatible #region-us
# wav2vec2-large-xls-r-300m-hindi-kaggle This model was trained from scratch on the common_voice dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
[ "# wav2vec2-large-xls-r-300m-hindi-kaggle\n\nThis model was trained from scratch on the common_voice dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 30\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.10.0+cu113\n- Datasets 1.18.3\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #robust-speech-event #hf-asr-leaderboard #hi #dataset-common_voice #endpoints_compatible #region-us \n", "# wav2vec2-large-xls-r-300m-hindi-kaggle\n\nThis model was trained from scratch on the common_voice dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 30\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.10.0+cu113\n- Datasets 1.18.3\n- Tokenizers 0.10.3" ]
question-answering
transformers
### How to use #### Requirements Transformers require `transformers` and `sentencepiece`, both of which can be installed using `pip`. ```sh pip install transformers sentencepiece ``` #### Pipelines 🚀 In case you are not familiar with Transformers, you can use pipelines instead. Note that, pipelines can't have _no answer_ for the questions. ```python from transformers import pipeline model_name = "SajjadAyoubi/bert-base-fa-qa" qa_pipeline = pipeline("question-answering", model=model_name, tokenizer=model_name) text = "سلام من سجاد ایوبی هستم ۲۰ سالمه و به پردازش زبان طبیعی علاقه دارم" questions = ["اسمم چیه؟", "چند سالمه؟", "به چی علاقه دارم؟"] for question in questions: print(qa_pipeline({"context": text, "question": question})) >>> {'score': 0.4839823544025421, 'start': 8, 'end': 18, 'answer': 'سجاد ایوبی'} >>> {'score': 0.3747948706150055, 'start': 24, 'end': 32, 'answer': '۲۰ سالمه'} >>> {'score': 0.5945395827293396, 'start': 38, 'end': 55, 'answer': 'پردازش زبان طبیعی'} ``` #### Manual approach 🔥 Using the Manual approach, it is possible to have _no answer_ with even better performance. - PyTorch ```python from transformers import AutoTokenizer, AutoModelForQuestionAnswering from src.utils import AnswerPredictor model_name = "SajjadAyoubi/bert-base-fa-qa" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForQuestionAnswering.from_pretrained(model_name) text = "سلام من سجاد ایوبی هستم ۲۰ سالمه و به پردازش زبان طبیعی علاقه دارم" questions = ["اسمم چیه؟", "چند سالمه؟", "به چی علاقه دارم؟"] # this class is from src/utils.py and you can read more about it predictor = AnswerPredictor(model, tokenizer, device="cpu", n_best=10) preds = predictor(questions, [text] * 3, batch_size=3) for k, v in preds.items(): print(v) ``` Produces an output such below: ``` 100%|██████████| 1/1 [00:00<00:00, 3.56it/s] {'score': 8.040637016296387, 'text': 'سجاد ایوبی'} {'score': 9.901972770690918, 'text': '۲۰'} {'score': 12.117212295532227, 'text': 'پردازش زبان طبیعی'} ``` - TensorFlow 2.X ```python from transformers import AutoTokenizer, TFAutoModelForQuestionAnswering from src.utils import TFAnswerPredictor model_name = "SajjadAyoubi/bert-base-fa-qa" tokenizer = AutoTokenizer.from_pretrained(model_name) model = TFAutoModelForQuestionAnswering.from_pretrained(model_name) text = "سلام من سجاد ایوبی هستم ۲۰ سالمه و به پردازش زبان طبیعی علاقه دارم" questions = ["اسمم چیه؟", "چند سالمه؟", "به چی علاقه دارم؟"] # this class is from src/utils.py, you can read more about it predictor = TFAnswerPredictor(model, tokenizer, n_best=10) preds = predictor(questions, [text] * 3, batch_size=3) for k, v in preds.items(): print(v) ``` Produces an output such below: ```text 100%|██████████| 1/1 [00:00<00:00, 3.56it/s] {'score': 8.040637016296387, 'text': 'سجاد ایوبی'} {'score': 9.901972770690918, 'text': '۲۰'} {'score': 12.117212295532227, 'text': 'پردازش زبان طبیعی'} ``` Or you can access the whole demonstration using [HowToUse iPython Notebook on Google Colab](https://colab.research.google.com/github/sajjjadayobi/PersianQA/blob/main/notebooks/HowToUse.ipynb)
{}
SajjadAyoubi/bert-base-fa-qa
null
[ "transformers", "pytorch", "tf", "jax", "bert", "question-answering", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #jax #bert #question-answering #endpoints_compatible #region-us
### How to use #### Requirements Transformers require 'transformers' and 'sentencepiece', both of which can be installed using 'pip'. #### Pipelines In case you are not familiar with Transformers, you can use pipelines instead. Note that, pipelines can't have _no answer_ for the questions. #### Manual approach Using the Manual approach, it is possible to have _no answer_ with even better performance. - PyTorch Produces an output such below: - TensorFlow 2.X Produces an output such below: Or you can access the whole demonstration using HowToUse iPython Notebook on Google Colab
[ "### How to use", "#### Requirements\n\nTransformers require 'transformers' and 'sentencepiece', both of which can be\ninstalled using 'pip'.", "#### Pipelines \n\nIn case you are not familiar with Transformers, you can use pipelines instead.\n\nNote that, pipelines can't have _no answer_ for the questions.", "#### Manual approach \n\nUsing the Manual approach, it is possible to have _no answer_ with even better\nperformance.\n\n- PyTorch\n\n\n\nProduces an output such below:\n\n\n- TensorFlow 2.X\n\n\n\nProduces an output such below:\n\n\n\nOr you can access the whole demonstration using HowToUse iPython Notebook on Google Colab" ]
[ "TAGS\n#transformers #pytorch #tf #jax #bert #question-answering #endpoints_compatible #region-us \n", "### How to use", "#### Requirements\n\nTransformers require 'transformers' and 'sentencepiece', both of which can be\ninstalled using 'pip'.", "#### Pipelines \n\nIn case you are not familiar with Transformers, you can use pipelines instead.\n\nNote that, pipelines can't have _no answer_ for the questions.", "#### Manual approach \n\nUsing the Manual approach, it is possible to have _no answer_ with even better\nperformance.\n\n- PyTorch\n\n\n\nProduces an output such below:\n\n\n- TensorFlow 2.X\n\n\n\nProduces an output such below:\n\n\n\nOr you can access the whole demonstration using HowToUse iPython Notebook on Google Colab" ]
feature-extraction
transformers
# CLIPfa: Connecting Farsi Text and Images OpenAI released [`the paper Learning Transferable Visual Models From Natural Language Supervision`](https://arxiv.org/abs/2103.00020) in which they present the CLIP (Contrastive Language–Image Pre-training) model. This model is trained to connect text and images, by matching their corresponding vector representations using a contrastive learning objective. CLIP consists of two separate models, a vision encoder and a text encoder. These were trained on 400 Million images and corresponding captions. We have trained a Farsi (Persian) version of OpenAI's CLIP on a dataset of 400,000 (image, text) pairs. We used [`Farahani's RoBERTa-fa`](https://huggingface.co/m3hrdadfi/roberta-zwnj-wnli-mean-tokens) as the text encoder and [‍‍`ViT‍`](https://huggingface.co/openai/clip-vit-base-patch32) as the vision encoder from Original CLIP and finetuned them. - It should be noted that only 400K pairs were used for this training, whereas 4 million pairs were used for the Original CLIP. Also, the training took 30 days across 592 GPUs powered by the V100 chip. ## How to use? Both models generate vectors with 768 dimensions. ```python from transformers import CLIPVisionModel, RobertaModel, AutoTokenizer, CLIPFeatureExtractor # download pre-trained models vision_encoder = CLIPVisionModel.from_pretrained('SajjadAyoubi/clip-fa-vision') preprocessor = CLIPFeatureExtractor.from_pretrained('SajjadAyoubi/clip-fa-vision') text_encoder = RobertaModel.from_pretrained('SajjadAyoubi/clip-fa-text') tokenizer = AutoTokenizer.from_pretrained('SajjadAyoubi/clip-fa-text') # define input image and input text text = 'something' image = PIL.Image.open('my_favorite_image.jpg') # compute embeddings text_embedding = text_encoder(**tokenizer(text, return_tensors='pt')).pooler_output image_embedding = vision_encoder(**preprocessor(image, return_tensors='pt')).pooler_output text_embedding.shape == image_embedding.shape ``` ## Demo: The followings are just some use cases of CLIPfa on 25K [`Unsplash images`](https://github.com/unsplash/datasets) - use `pip install -q git+https://github.com/sajjjadayobi/clipfa.git` ```python from clipfa import CLIPDemo demo = CLIPDemo(vision_encoder, text_encoder, tokenizer) demo.compute_text_embeddings(['گاو' ,'اسب' ,'ماهی']) demo.compute_image_embeddings(test_df.image_path.to_list()) ``` ## Online Demo: [CLIPfa at Huggingface🤗 spaces](https://huggingface.co/spaces/SajjadAyoubi/CLIPfa-Demo) We used a small set of images (25K) to keep this app almost real-time, but it's obvious that the quality of image search depends heavily on the size of the image database. > Made with ❤️ in my basement🤫
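A possible next step after the embedding snippet above (illustrative, not from the original card): once the text and image vectors have the same shape, they can be compared with cosine similarity.

```python
import torch

# Illustrative sketch: cosine similarity between the text and image embeddings computed above.
similarity = torch.nn.functional.cosine_similarity(text_embedding, image_embedding)
print(similarity.item())
```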
{}
SajjadAyoubi/clip-fa-text
null
[ "transformers", "pytorch", "roberta", "feature-extraction", "arxiv:2103.00020", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2103.00020" ]
[]
TAGS #transformers #pytorch #roberta #feature-extraction #arxiv-2103.00020 #endpoints_compatible #has_space #region-us
# CLIPfa: Connecting Farsi Text and Images OpenAI released 'the paper Learning Transferable Visual Models From Natural Language Supervision' in which they present the CLIP (Contrastive Language–Image Pre-training) model. This model is trained to connect text and images, by matching their corresponding vector representations using a contrastive learning objective. CLIP consists of two separate models, a vision encoder and a text encoder. These were trained on 400 Million images and corresponding captions. We have trained a Farsi (Persian) version of OpenAI's CLIP on a dataset of 400,000 (image, text) pairs. We used 'Farahani's RoBERTa-fa' as the text encoder and ‍‍'ViT‍' as the vision encoder from Original CLIP and finetuned them. - It should be noted that only 400K pairs were used for this training, whereas 4 million pairs were used for the Original CLIP. Also, the training took 30 days across 592 GPUs powered by the V100 chip. ## How to use? Both models generate vectors with 768 dimensions. ## Demo: The followings are just some use cases of CLIPfa on 25K 'Unsplash images' - use 'pip install -q git+URL ## Online Demo: CLIPfa at Huggingface spaces We used a small set of images (25K) to keep this app almost real-time, but it's obvious that the quality of image search depends heavily on the size of the image database. > Made with ️ in my basement
[ "# CLIPfa: Connecting Farsi Text and Images\nOpenAI released 'the paper Learning Transferable Visual Models From Natural Language Supervision' in which they present the CLIP (Contrastive Language–Image Pre-training) model. This model is trained to connect text and images, by matching their corresponding vector representations using a contrastive learning objective. CLIP consists of two separate models, a vision encoder and a text encoder. These were trained on 400 Million images and corresponding captions. We have trained a Farsi (Persian) version of OpenAI's CLIP on a dataset of 400,000 (image, text) pairs. We used 'Farahani's RoBERTa-fa' as the text encoder and ‍‍'ViT‍' as the vision encoder from Original CLIP and finetuned them.\n\n\n- It should be noted that only 400K pairs were used for this training, whereas 4 million pairs were used for the Original CLIP. Also, the training took 30 days across 592 GPUs powered by the V100 chip.", "## How to use?\nBoth models generate vectors with 768 dimensions.", "## Demo:\nThe followings are just some use cases of CLIPfa on 25K 'Unsplash images'\n- use 'pip install -q git+URL", "## Online Demo: CLIPfa at Huggingface spaces\nWe used a small set of images (25K) to keep this app almost real-time, but it's obvious that the quality of image search depends heavily on the size of the image database. \n\n> Made with ️ in my basement" ]
[ "TAGS\n#transformers #pytorch #roberta #feature-extraction #arxiv-2103.00020 #endpoints_compatible #has_space #region-us \n", "# CLIPfa: Connecting Farsi Text and Images\nOpenAI released 'the paper Learning Transferable Visual Models From Natural Language Supervision' in which they present the CLIP (Contrastive Language–Image Pre-training) model. This model is trained to connect text and images, by matching their corresponding vector representations using a contrastive learning objective. CLIP consists of two separate models, a vision encoder and a text encoder. These were trained on 400 Million images and corresponding captions. We have trained a Farsi (Persian) version of OpenAI's CLIP on a dataset of 400,000 (image, text) pairs. We used 'Farahani's RoBERTa-fa' as the text encoder and ‍‍'ViT‍' as the vision encoder from Original CLIP and finetuned them.\n\n\n- It should be noted that only 400K pairs were used for this training, whereas 4 million pairs were used for the Original CLIP. Also, the training took 30 days across 592 GPUs powered by the V100 chip.", "## How to use?\nBoth models generate vectors with 768 dimensions.", "## Demo:\nThe followings are just some use cases of CLIPfa on 25K 'Unsplash images'\n- use 'pip install -q git+URL", "## Online Demo: CLIPfa at Huggingface spaces\nWe used a small set of images (25K) to keep this app almost real-time, but it's obvious that the quality of image search depends heavily on the size of the image database. \n\n> Made with ️ in my basement" ]
feature-extraction
transformers
# CLIPfa: Connecting Farsi Text and Images OpenAI released [`the paper Learning Transferable Visual Models From Natural Language Supervision`](https://arxiv.org/abs/2103.00020) in which they present the CLIP (Contrastive Language–Image Pre-training) model. This model is trained to connect text and images, by matching their corresponding vector representations using a contrastive learning objective. CLIP consists of two separate models, a vision encoder and a text encoder. These were trained on 400 Million images and corresponding captions. We have trained a Farsi (Persian) version of OpenAI's CLIP on a dataset of 400,000 (image, text) pairs. We used [`Farahani's RoBERTa-fa`](https://huggingface.co/m3hrdadfi/roberta-zwnj-wnli-mean-tokens) as the text encoder and [‍‍`ViT‍`](https://huggingface.co/openai/clip-vit-base-patch32) as the vision encoder from Original CLIP and finetuned them. - It should be noted that only 400K pairs were used for this training, whereas 4 million pairs were used for the Original CLIP. Also, the training took 30 days across 592 GPUs powered by the V100 chip. ## How to use? Both models generate vectors with 768 dimensions. ```python from transformers import CLIPVisionModel, RobertaModel, AutoTokenizer, CLIPFeatureExtractor # download pre-trained models vision_encoder = CLIPVisionModel.from_pretrained('SajjadAyoubi/clip-fa-vision') preprocessor = CLIPFeatureExtractor.from_pretrained('SajjadAyoubi/clip-fa-vision') text_encoder = RobertaModel.from_pretrained('SajjadAyoubi/clip-fa-text') tokenizer = AutoTokenizer.from_pretrained('SajjadAyoubi/clip-fa-text') # define input image and input text text = 'something' image = PIL.Image.open('my_favorite_image.jpg') # compute embeddings text_embedding = text_encoder(**tokenizer(text, return_tensors='pt')).pooler_output image_embedding = vision_encoder(**preprocessor(image, return_tensors='pt')).pooler_output text_embedding.shape == image_embedding.shape ``` ## Demo: The followings are just some use cases of CLIPfa on 25K [`Unsplash images`](https://github.com/unsplash/datasets) - use `pip install -q git+https://github.com/sajjjadayobi/clipfa.git` ```python from clipfa import CLIPDemo demo = CLIPDemo(vision_encoder, text_encoder, tokenizer) demo.compute_text_embeddings(['گاو' ,'اسب' ,'ماهی']) demo.compute_image_embeddings(test_df.image_path.to_list()) ``` ## Online Demo: [CLIPfa at Huggingface🤗 spaces](https://huggingface.co/spaces/SajjadAyoubi/CLIPfa-Demo) We used a small set of images (25K) to keep this app almost real-time, but it's obvious that the quality of image search depends heavily on the size of the image database. > Made with ❤️ in my basement🤫
{}
SajjadAyoubi/clip-fa-vision
null
[ "transformers", "pytorch", "clip_vision_model", "feature-extraction", "arxiv:2103.00020", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2103.00020" ]
[]
TAGS #transformers #pytorch #clip_vision_model #feature-extraction #arxiv-2103.00020 #endpoints_compatible #region-us
# CLIPfa: Connecting Farsi Text and Images OpenAI released 'the paper Learning Transferable Visual Models From Natural Language Supervision' in which they present the CLIP (Contrastive Language–Image Pre-training) model. This model is trained to connect text and images, by matching their corresponding vector representations using a contrastive learning objective. CLIP consists of two separate models, a vision encoder and a text encoder. These were trained on 400 Million images and corresponding captions. We have trained a Farsi (Persian) version of OpenAI's CLIP on a dataset of 400,000 (image, text) pairs. We used 'Farahani's RoBERTa-fa' as the text encoder and ‍‍'ViT‍' as the vision encoder from Original CLIP and finetuned them. - It should be noted that only 400K pairs were used for this training, whereas 4 million pairs were used for the Original CLIP. Also, the training took 30 days across 592 GPUs powered by the V100 chip. ## How to use? Both models generate vectors with 768 dimensions. ## Demo: The followings are just some use cases of CLIPfa on 25K 'Unsplash images' - use 'pip install -q git+URL ## Online Demo: CLIPfa at Huggingface spaces We used a small set of images (25K) to keep this app almost real-time, but it's obvious that the quality of image search depends heavily on the size of the image database. > Made with ️ in my basement
[ "# CLIPfa: Connecting Farsi Text and Images\nOpenAI released 'the paper Learning Transferable Visual Models From Natural Language Supervision' in which they present the CLIP (Contrastive Language–Image Pre-training) model. This model is trained to connect text and images, by matching their corresponding vector representations using a contrastive learning objective. CLIP consists of two separate models, a vision encoder and a text encoder. These were trained on 400 Million images and corresponding captions. We have trained a Farsi (Persian) version of OpenAI's CLIP on a dataset of 400,000 (image, text) pairs. We used 'Farahani's RoBERTa-fa' as the text encoder and ‍‍'ViT‍' as the vision encoder from Original CLIP and finetuned them.\n\n\n- It should be noted that only 400K pairs were used for this training, whereas 4 million pairs were used for the Original CLIP. Also, the training took 30 days across 592 GPUs powered by the V100 chip.", "## How to use?\nBoth models generate vectors with 768 dimensions.", "## Demo:\nThe followings are just some use cases of CLIPfa on 25K 'Unsplash images'\n- use 'pip install -q git+URL", "## Online Demo: CLIPfa at Huggingface spaces\nWe used a small set of images (25K) to keep this app almost real-time, but it's obvious that the quality of image search depends heavily on the size of the image database. \n\n> Made with ️ in my basement" ]
[ "TAGS\n#transformers #pytorch #clip_vision_model #feature-extraction #arxiv-2103.00020 #endpoints_compatible #region-us \n", "# CLIPfa: Connecting Farsi Text and Images\nOpenAI released 'the paper Learning Transferable Visual Models From Natural Language Supervision' in which they present the CLIP (Contrastive Language–Image Pre-training) model. This model is trained to connect text and images, by matching their corresponding vector representations using a contrastive learning objective. CLIP consists of two separate models, a vision encoder and a text encoder. These were trained on 400 Million images and corresponding captions. We have trained a Farsi (Persian) version of OpenAI's CLIP on a dataset of 400,000 (image, text) pairs. We used 'Farahani's RoBERTa-fa' as the text encoder and ‍‍'ViT‍' as the vision encoder from Original CLIP and finetuned them.\n\n\n- It should be noted that only 400K pairs were used for this training, whereas 4 million pairs were used for the Original CLIP. Also, the training took 30 days across 592 GPUs powered by the V100 chip.", "## How to use?\nBoth models generate vectors with 768 dimensions.", "## Demo:\nThe followings are just some use cases of CLIPfa on 25K 'Unsplash images'\n- use 'pip install -q git+URL", "## Online Demo: CLIPfa at Huggingface spaces\nWe used a small set of images (25K) to keep this app almost real-time, but it's obvious that the quality of image search depends heavily on the size of the image database. \n\n> Made with ️ in my basement" ]
fill-mask
transformers
<span align="center"> <a href="https://huggingface.co/SajjadAyoubi/"><img src="https://img.shields.io/static/v1?label=%F0%9F%A4%97%20Hugging%20Face&message=SajjadAyoubi&color=yellow"></a> <a href="https://colab.research.google.com/github/sajjjadayobi/PersianQA/blob/main/notebooks/Demo.ipynb"><img src="https://img.shields.io/static/v1?label=Colab&message=Fine-tuning Example&logo=Google%20Colab&color=f9ab00"></a> </span> # ParsBigBird: Persian Bert For **Long-Range** Sequences The [Bert](https://arxiv.org/abs/1810.04805) and [ParsBert](https://arxiv.org/abs/2005.12515) algorithms can handle texts with token lengths of up to 512, however, many tasks such as summarizing and answering questions require longer texts. In our work, we have trained the [BigBird](https://arxiv.org/abs/2007.14062) model for the Persian language to process texts up to 4096 in the Farsi (Persian) language using sparse attention. ## Evaluation: 🌡️ We have evaluated the model on three tasks with different sequence lengths | Name | Params | SnappFood (F1) | Digikala Magazine(F1) | PersianQA (F1) | | :--------------------------------------------------------------: | :----: | :-----------------: | :---------------: | :--------------: | | [distil-bigbird-fa-zwnj](https://github.com/sajjjadayobi/ParsBigBird) | 78M | 85.43% | **94.05%** | **73.34%** | | [bert-base-fa](https://github.com/hooshvare/parsbert) | 118M | **87.98%** | 93.65% | 70.06% | - Despite being as big as distill-bert, the model performs equally well as ParsBert and is much better on PersianQA which requires much more context - This evaluation was based on `max_lentgh=2048` (It can be changed up to 4096) ## How to use❓ ### As Contextualized Word Embedding ```python from transformers import BigBirdModel, AutoTokenizer MODEL_NAME = "SajjadAyoubi/distil-bigbird-fa-zwnj" # by default its in `block_sparse` block_size=32 model = BigBirdModel.from_pretrained(MODEL_NAME, block_size=32) # you can use full attention like the following: use this when input isn't longer than 512 model = BigBirdModel.from_pretrained(MODEL_NAME, attention_type="original_full") text = "😃 امیدوارم مدل بدردبخوری باشه چون خیلی طول کشید تا ترین بشه" tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME) tokens = tokenizer(text, return_tensors='pt') output = model(**tokens) # contextualized embedding ``` ### As Fill Blank ```python from transformers import pipeline MODEL_NAME = 'SajjadAyoubi/distil-bigbird-fa-zwnj' fill = pipeline('fill-mask', model=MODEL_NAME, tokenizer=MODEL_NAME) results = fill('تهران پایتخت [MASK] است.') print(results[0]['token_str']) >>> 'ایران' ``` ## Pretraining details: 🔭 This model was pretrained using a masked language model (MLM) objective on the Persian section of the Oscar dataset. Following the original BERT training, 15% of tokens were masked. This was first described in this [paper](https://arxiv.org/abs/2007.14062) and released in this [repository](https://github.com/google-research/bigbird). Documents longer than 4096 were split into multiple documents, while documents much smaller than 4096 were merged using the [SEP] token. Model is warm started from `distilbert-fa`’s [checkpoint](https://huggingface.co/HooshvareLab/distilbert-fa-zwnj-base). - For more details, you can take a look at config.json at the model card in 🤗 Model Hub ## Fine Tuning Recommendations: 🐤 Due to the model's memory requirements, `gradient_checkpointing` and `gradient_accumulation` should be used to maintain a reasonable batch size. 
Considering this model isn't really big, it's a good idea to first fine-tune it on your dataset using Masked LM objective (also called intermediate fine-tuning) before implementing the main task. In block_sparse mode, it doesn't matter how many tokens are input. It just attends to 256 tokens. Furthermore, original_full should be used up to 512 sequence lengths (instead of block sparse). ### Fine Tuning Examples 👷‍♂️👷‍♀️ | Dataset | Fine Tuning Example | | ------------------------------------- | ------------------------------------------------------------ | | Digikala Magazine Text Classification | <a href="https://colab.research.google.com/github/sajjjadayobi/PersianQA/blob/main/notebooks/Demo.ipynb"><img src="https://img.shields.io/static/v1?label=Colab&message=Fine-tuning Example&logo=Google%20Colab&color=f9ab00"></a> | ## Contact us: 🤝 If you have a technical question regarding the model, pretraining, code or publication, please create an issue in the repository. This is the fastest way to reach us. ## Citation: ↩️ we didn't publish any papers on the work. However, if you did, please cite us properly with an entry like one below. ```bibtex @misc{ParsBigBird, author = {Ayoubi, Sajjad}, title = {ParsBigBird: Persian Bert For Long-Range Sequences}, year = 2021, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/SajjjadAyobi/ParsBigBird}}, } ```
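The fine-tuning recommendation above (gradient checkpointing plus gradient accumulation to keep the effective batch size reasonable) might translate into Trainer settings like the following hedged sketch; the hyperparameter values are placeholders, not the author's.

```python
from transformers import TrainingArguments

# Illustrative sketch only: settings reflecting the memory advice above; all values are placeholders.
args = TrainingArguments(
    output_dir="parsbigbird-finetuned",
    per_device_train_batch_size=2,    # small physical batch to fit long sequences in memory
    gradient_accumulation_steps=16,   # effective batch size = 2 * 16 = 32
    gradient_checkpointing=True,      # trade extra compute for lower memory use
    learning_rate=3e-5,
    num_train_epochs=3,
)
```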
{}
SajjadAyoubi/distil-bigbird-fa-zwnj
null
[ "transformers", "pytorch", "big_bird", "fill-mask", "arxiv:1810.04805", "arxiv:2005.12515", "arxiv:2007.14062", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1810.04805", "2005.12515", "2007.14062" ]
[]
TAGS #transformers #pytorch #big_bird #fill-mask #arxiv-1810.04805 #arxiv-2005.12515 #arxiv-2007.14062 #autotrain_compatible #endpoints_compatible #region-us
ParsBigBird: Persian Bert For Long-Range Sequences ================================================== The Bert and ParsBert algorithms can handle texts with token lengths of up to 512, however, many tasks such as summarizing and answering questions require longer texts. In our work, we have trained the BigBird model for the Persian language to process texts up to 4096 in the Farsi (Persian) language using sparse attention. Evaluation: ️ ------------- We have evaluated the model on three tasks with different sequence lengths * Despite being as big as distill-bert, the model performs equally well as ParsBert and is much better on PersianQA which requires much more context * This evaluation was based on 'max\_lentgh=2048' (It can be changed up to 4096) How to use ---------- ### As Contextualized Word Embedding ### As Fill Blank Pretraining details: -------------------- This model was pretrained using a masked language model (MLM) objective on the Persian section of the Oscar dataset. Following the original BERT training, 15% of tokens were masked. This was first described in this paper and released in this repository. Documents longer than 4096 were split into multiple documents, while documents much smaller than 4096 were merged using the [SEP] token. Model is warm started from 'distilbert-fa'’s checkpoint. * For more details, you can take a look at URL at the model card in Model Hub Fine Tuning Recommendations: ---------------------------- Due to the model's memory requirements, 'gradient\_checkpointing' and 'gradient\_accumulation' should be used to maintain a reasonable batch size. Considering this model isn't really big, it's a good idea to first fine-tune it on your dataset using Masked LM objective (also called intermediate fine-tuning) before implementing the main task. In block\_sparse mode, it doesn't matter how many tokens are input. It just attends to 256 tokens. Furthermore, original\_full should be used up to 512 sequence lengths (instead of block sparse). ### Fine Tuning Examples ‍️‍️ Contact us: ----------- If you have a technical question regarding the model, pretraining, code or publication, please create an issue in the repository. This is the fastest way to reach us. : ↩️ we didn't publish any papers on the work. However, if you did, please cite us properly with an entry like one below.
[ "### As Contextualized Word Embedding", "### As Fill Blank\n\n\nPretraining details:\n--------------------\n\n\nThis model was pretrained using a masked language model (MLM) objective on the Persian section of the Oscar dataset. Following the original BERT training, 15% of tokens were masked. This was first described in this paper and released in this repository. Documents longer than 4096 were split into multiple documents, while documents much smaller than 4096 were merged using the [SEP] token. Model is warm started from 'distilbert-fa'’s checkpoint.\n\n\n* For more details, you can take a look at URL at the model card in Model Hub\n\n\nFine Tuning Recommendations:\n----------------------------\n\n\nDue to the model's memory requirements, 'gradient\\_checkpointing' and 'gradient\\_accumulation' should be used to maintain a reasonable batch size. Considering this model isn't really big, it's a good idea to first fine-tune it on your dataset using Masked LM objective (also called intermediate fine-tuning) before implementing the main task. In block\\_sparse mode, it doesn't matter how many tokens are input. It just attends to 256 tokens. Furthermore, original\\_full should be used up to 512 sequence lengths (instead of block sparse).", "### Fine Tuning Examples ‍️‍️\n\n\n\nContact us:\n-----------\n\n\nIf you have a technical question regarding the model, pretraining, code or publication, please create an issue in the repository. This is the fastest way to reach us.\n\n\n: ↩️\nwe didn't publish any papers on the work. However, if you did, please cite us properly with an entry like one below." ]
[ "TAGS\n#transformers #pytorch #big_bird #fill-mask #arxiv-1810.04805 #arxiv-2005.12515 #arxiv-2007.14062 #autotrain_compatible #endpoints_compatible #region-us \n", "### As Contextualized Word Embedding", "### As Fill Blank\n\n\nPretraining details:\n--------------------\n\n\nThis model was pretrained using a masked language model (MLM) objective on the Persian section of the Oscar dataset. Following the original BERT training, 15% of tokens were masked. This was first described in this paper and released in this repository. Documents longer than 4096 were split into multiple documents, while documents much smaller than 4096 were merged using the [SEP] token. Model is warm started from 'distilbert-fa'’s checkpoint.\n\n\n* For more details, you can take a look at URL at the model card in Model Hub\n\n\nFine Tuning Recommendations:\n----------------------------\n\n\nDue to the model's memory requirements, 'gradient\\_checkpointing' and 'gradient\\_accumulation' should be used to maintain a reasonable batch size. Considering this model isn't really big, it's a good idea to first fine-tune it on your dataset using Masked LM objective (also called intermediate fine-tuning) before implementing the main task. In block\\_sparse mode, it doesn't matter how many tokens are input. It just attends to 256 tokens. Furthermore, original\\_full should be used up to 512 sequence lengths (instead of block sparse).", "### Fine Tuning Examples ‍️‍️\n\n\n\nContact us:\n-----------\n\n\nIf you have a technical question regarding the model, pretraining, code or publication, please create an issue in the repository. This is the fastest way to reach us.\n\n\n: ↩️\nwe didn't publish any papers on the work. However, if you did, please cite us properly with an entry like one below." ]
question-answering
transformers
### How to use #### Requirements Transformers require `transformers` and `sentencepiece`, both of which can be installed using `pip`. ```sh pip install transformers sentencepiece ``` #### Pipelines 🚀 In case you are not familiar with Transformers, you can use pipelines instead. Note that, pipelines can't have _no answer_ for the questions. ```python from transformers import pipeline model_name = "SajjadAyoubi/lm-roberta-large-fa-qa" qa_pipeline = pipeline("question-answering", model=model_name, tokenizer=model_name) text = "سلام من سجاد ایوبی هستم ۲۰ سالمه و به پردازش زبان طبیعی علاقه دارم" questions = ["اسمم چیه؟", "چند سالمه؟", "به چی علاقه دارم؟"] for question in questions: print(qa_pipeline({"context": text, "question": question})) >>> {'score': 0.4839823544025421, 'start': 8, 'end': 18, 'answer': 'سجاد ایوبی'} >>> {'score': 0.3747948706150055, 'start': 24, 'end': 32, 'answer': '۲۰ سالمه'} >>> {'score': 0.5945395827293396, 'start': 38, 'end': 55, 'answer': 'پردازش زبان طبیعی'} ``` #### Manual approach 🔥 Using the Manual approach, it is possible to have _no answer_ with even better performance. - PyTorch ```python from transformers import AutoTokenizer, AutoModelForQuestionAnswering from src.utils import AnswerPredictor model_name = "SajjadAyoubi/lm-roberta-large-fa-qa" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForQuestionAnswering.from_pretrained(model_name) text = "سلام من سجاد ایوبی هستم ۲۰ سالمه و به پردازش زبان طبیعی علاقه دارم" questions = ["اسمم چیه؟", "چند سالمه؟", "به چی علاقه دارم؟"] # this class is from src/utils.py and you can read more about it predictor = AnswerPredictor(model, tokenizer, device="cpu", n_best=10) preds = predictor(questions, [text] * 3, batch_size=3) for k, v in preds.items(): print(v) ``` Produces an output such below: ``` 100%|██████████| 1/1 [00:00<00:00, 3.56it/s] {'score': 8.040637016296387, 'text': 'سجاد ایوبی'} {'score': 9.901972770690918, 'text': '۲۰'} {'score': 12.117212295532227, 'text': 'پردازش زبان طبیعی'} ``` - TensorFlow 2.X ```python from transformers import AutoTokenizer, TFAutoModelForQuestionAnswering from src.utils import TFAnswerPredictor model_name = "SajjadAyoubi/lm-roberta-large-fa-qa" tokenizer = AutoTokenizer.from_pretrained(model_name) model = TFAutoModelForQuestionAnswering.from_pretrained(model_name) text = "سلام من سجاد ایوبی هستم ۲۰ سالمه و به پردازش زبان طبیعی علاقه دارم" questions = ["اسمم چیه؟", "چند سالمه؟", "به چی علاقه دارم؟"] # this class is from src/utils.py, you can read more about it predictor = TFAnswerPredictor(model, tokenizer, n_best=10) preds = predictor(questions, [text] * 3, batch_size=3) for k, v in preds.items(): print(v) ``` Produces an output such below: ```text 100%|██████████| 1/1 [00:00<00:00, 3.56it/s] {'score': 8.040637016296387, 'text': 'سجاد ایوبی'} {'score': 9.901972770690918, 'text': '۲۰'} {'score': 12.117212295532227, 'text': 'پردازش زبان طبیعی'} ``` Or you can access the whole demonstration using [HowToUse iPython Notebook on Google Colab](https://colab.research.google.com/github/sajjjadayobi/PersianQA/blob/main/notebooks/HowToUse.ipynb)
{}
SajjadAyoubi/xlm-roberta-large-fa-qa
null
[ "transformers", "pytorch", "tf", "xlm-roberta", "question-answering", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #xlm-roberta #question-answering #endpoints_compatible #region-us
### How to use #### Requirements Transformers require 'transformers' and 'sentencepiece', both of which can be installed using 'pip'. #### Pipelines In case you are not familiar with Transformers, you can use pipelines instead. Note that, pipelines can't have _no answer_ for the questions. #### Manual approach Using the Manual approach, it is possible to have _no answer_ with even better performance. - PyTorch Produces an output such below: - TensorFlow 2.X Produces an output such below: Or you can access the whole demonstration using HowToUse iPython Notebook on Google Colab
[ "### How to use", "#### Requirements\n\nTransformers require 'transformers' and 'sentencepiece', both of which can be\ninstalled using 'pip'.", "#### Pipelines \n\nIn case you are not familiar with Transformers, you can use pipelines instead.\n\nNote that, pipelines can't have _no answer_ for the questions.", "#### Manual approach \n\nUsing the Manual approach, it is possible to have _no answer_ with even better\nperformance.\n\n- PyTorch\n\n\n\nProduces an output such below:\n\n\n- TensorFlow 2.X\n\n\n\nProduces an output such below:\n\n\n\nOr you can access the whole demonstration using HowToUse iPython Notebook on Google Colab" ]
[ "TAGS\n#transformers #pytorch #tf #xlm-roberta #question-answering #endpoints_compatible #region-us \n", "### How to use", "#### Requirements\n\nTransformers require 'transformers' and 'sentencepiece', both of which can be\ninstalled using 'pip'.", "#### Pipelines \n\nIn case you are not familiar with Transformers, you can use pipelines instead.\n\nNote that, pipelines can't have _no answer_ for the questions.", "#### Manual approach \n\nUsing the Manual approach, it is possible to have _no answer_ with even better\nperformance.\n\n- PyTorch\n\n\n\nProduces an output such below:\n\n\n- TensorFlow 2.X\n\n\n\nProduces an output such below:\n\n\n\nOr you can access the whole demonstration using HowToUse iPython Notebook on Google Colab" ]
text-classification
transformers
* IMDB_URDUSENTIMENT_MODEL: I have used the IMDB Urdu dataset to create a custom model using DistilBertForSequenceClassification.
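The card does not show how to run the model, so the following is a hedged sketch (my assumption, mirroring the widget text in this record rather than anything documented by the author).

```python
from transformers import pipeline

# Illustrative sketch only: score the sentiment of an Urdu sentence with the fine-tuned classifier.
classifier = pipeline("text-classification", model="Sakil/IMDB_URDUSENTIMENT_MODEL")

review = "میں تمہیں پسند کرتا ہوں."  # an Urdu sentence; replace with your own review text
print(classifier(review))  # e.g. [{"label": ..., "score": ...}]
```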
{"language": ["en"], "license": "apache-2.0", "tags": ["text Classification"], "widget": [{"text": "\u0645\u06cc\u06ba \u062a\u0645\u06c1\u06cc\u06ba \u067e\u0633\u0646\u062f \u06a9\u0631\u062a\u0627 \u06c1\u0648\u06ba. </s></s> \u0645\u06cc\u06ba \u062a\u0645 \u0633\u06d2 \u067e\u06cc\u0627\u0631 \u06a9\u0631\u062a\u0627 \u06c1\u0648\u06ba."}]}
Sakil/IMDB_URDUSENTIMENT_MODEL
null
[ "transformers", "pytorch", "safetensors", "distilbert", "text-classification", "text Classification", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #safetensors #distilbert #text-classification #text Classification #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
* IMDB_URDUSENTIMENT_MODEL I have used IMDB URDU dataset to create custom model by using DistilBertForSequenceClassification.
[]
[ "TAGS\n#transformers #pytorch #safetensors #distilbert #text-classification #text Classification #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
text-classification
transformers
# Dataset Collection: * The hate speech dataset is collected from different open sources such as Kaggle and social media platforms like Twitter. * The dataset has two classes: hate speech and non hate speech. * The class distribution is equal. * Different strategies have been followed during the data gathering phase. * The dataset is collected from relevant sources. # The distilbert-base-uncased model is fine-tuned for Hate Speech Detection * The model is fine-tuned on this dataset. * The model can be used to create labels for academic or industrial purposes. * The model can also be used directly for inference. # Data Fields: **label**: 0 - hate speech, 1 - not hate speech # Application: * This model is useful for detecting hate speech in tweets. * There are numerous situations where we have tweet data but no labels, so this approach can be used to create labels. * You can fine-tune this model for your particular use cases. # Model Implementation
```python
# !pip install transformers[sentencepiece]
from transformers import pipeline

model_name = "Sakil/distilbert_lazylearner_hatespeech_detection"
classifier = pipeline("text-classification", model=model_name)

classifier("!!! RT @mayasolovely: As a woman you shouldn't complain about cleaning up your house. &amp; as a man you should always take the trash out...")
```
# Github: [Sakil Ansari](https://github.com/Sakil786/hate_speech_detection_pretrained_model)
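As a supplementary, hedged illustration (not part of the original card), the pipeline output can be mapped back to the data fields described above; the `LABEL_0`/`LABEL_1` names are an assumption and should be verified against `model.config.id2label`:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Sakil/distilbert_lazylearner_hatespeech_detection",
)

# assumed mapping following the "Data Fields" section: 0 = hate speech, 1 = not hate speech
id2meaning = {"LABEL_0": "hate speech", "LABEL_1": "not hate speech"}

pred = classifier("You should be ashamed of yourself!")[0]
print(pred["label"], "->", id2meaning.get(pred["label"], pred["label"]), round(pred["score"], 3))
```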
{"language": "en", "license": "apache-2.0", "tags": ["hate", "speech"], "widget": [{"text": "RT @ShenikaRoberts: The shit you hear about me might be true or it might be faker than the bitch who told it to ya &#5736"}]}
Sakil/distilbert_lazylearner_hatespeech_detection
null
[ "transformers", "pytorch", "distilbert", "text-classification", "hate", "speech", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #distilbert #text-classification #hate #speech #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# Dataset Collection: * The hatespeech dataset is collected from different open sources like Kaggle ,social media like Twitter. * The dataset has the two classes hatespeech and non hatespeech. * The class distribution is equal * Different strategies have been followed during the data gathering phase. * The dataset is collected from relevant sources. # distilbert-base-uncased model is fine-tuned for Hate Speech Detection * The model is fine-tuned on the dataset. * This model can be used to create the labels for academic purposes or for industrial purposes. * This model can be used for the inference purpose as well. # Data Fields: label: 0 - it is a hate speech, 1 - not a hate speech # Application: * This model is useful for the detection of hatespeech in the tweets. * There are numerous situations where we have tweet data but no labels, so this approach can be used to create labels. * You can fine-tune this model for your particular use cases. # Model Implementation # !pip install transformers[sentencepiece] from transformers import pipeline model_name="Sakil/distilbert_lazylearner_hatespeech_detection" classifier = pipeline("text-classification",model=model_name) classifier("!!! RT @mayasolovely: As a woman you shouldn't complain about cleaning up your house. &amp; as a man you should always take the trash out...") # Github: Sakil Ansari
[ "# Dataset Collection:\n* The hatespeech dataset is collected from different open sources like Kaggle ,social media like Twitter.\n* The dataset has the two classes hatespeech and non hatespeech.\n* The class distribution is equal\n* Different strategies have been followed during the data gathering phase.\n* The dataset is collected from relevant sources.", "# distilbert-base-uncased model is fine-tuned for Hate Speech Detection\n* The model is fine-tuned on the dataset.\n* This model can be used to create the labels for academic purposes or for industrial purposes.\n* This model can be used for the inference purpose as well.", "# Data Fields:\n \nlabel: 0 - it is a hate speech, 1 - not a hate speech", "# Application:\n* This model is useful for the detection of hatespeech in the tweets.\n* There are numerous situations where we have tweet data but no labels, so this approach can be used to create labels.\n* You can fine-tune this model for your particular use cases.", "# Model Implementation", "# !pip install transformers[sentencepiece]\n\nfrom transformers import pipeline\n\nmodel_name=\"Sakil/distilbert_lazylearner_hatespeech_detection\"\n\nclassifier = pipeline(\"text-classification\",model=model_name)\n\nclassifier(\"!!! RT @mayasolovely: As a woman you shouldn't complain about cleaning up your house. &amp; as a man you should always take the trash out...\")", "# Github: Sakil Ansari" ]
[ "TAGS\n#transformers #pytorch #distilbert #text-classification #hate #speech #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# Dataset Collection:\n* The hatespeech dataset is collected from different open sources like Kaggle ,social media like Twitter.\n* The dataset has the two classes hatespeech and non hatespeech.\n* The class distribution is equal\n* Different strategies have been followed during the data gathering phase.\n* The dataset is collected from relevant sources.", "# distilbert-base-uncased model is fine-tuned for Hate Speech Detection\n* The model is fine-tuned on the dataset.\n* This model can be used to create the labels for academic purposes or for industrial purposes.\n* This model can be used for the inference purpose as well.", "# Data Fields:\n \nlabel: 0 - it is a hate speech, 1 - not a hate speech", "# Application:\n* This model is useful for the detection of hatespeech in the tweets.\n* There are numerous situations where we have tweet data but no labels, so this approach can be used to create labels.\n* You can fine-tune this model for your particular use cases.", "# Model Implementation", "# !pip install transformers[sentencepiece]\n\nfrom transformers import pipeline\n\nmodel_name=\"Sakil/distilbert_lazylearner_hatespeech_detection\"\n\nclassifier = pipeline(\"text-classification\",model=model_name)\n\nclassifier(\"!!! RT @mayasolovely: As a woman you shouldn't complain about cleaning up your house. &amp; as a man you should always take the trash out...\")", "# Github: Sakil Ansari" ]
text-classification
transformers
* IMDBSentimentDistilBertModel: I have used the IMDB movie review dataset to create a custom model using DistilBertForSequenceClassification.
```python
from transformers import DistilBertForSequenceClassification, Trainer, TrainingArguments

model = DistilBertForSequenceClassification.from_pretrained('./imdbsentdistilbertmodel')
```
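A fuller, hedged inference sketch (not from the original card) that loads the checkpoint from the Hub id rather than the local `./imdbsentdistilbertmodel` directory; it assumes the repository also hosts the matching tokenizer files:

```python
import torch
from transformers import DistilBertTokenizerFast, DistilBertForSequenceClassification

model_id = "Sakil/imdbsentdistilbertmodel"  # Hub id of this card
tokenizer = DistilBertTokenizerFast.from_pretrained(model_id)
model = DistilBertForSequenceClassification.from_pretrained(model_id)
model.eval()

inputs = tokenizer("I really enjoyed this movie.", return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # index-to-sentiment mapping depends on how the labels were encoded during training
```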
{"language": ["en"], "license": "apache-2.0", "tags": ["text Classification"], "widget": [{"text": "I like you. </s></s> I love you."}]}
Sakil/imdbsentdistilbertmodel
null
[ "transformers", "pytorch", "distilbert", "text-classification", "text Classification", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #distilbert #text-classification #text Classification #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
* IMDBSentimentDistilBertModel: - I have used IMDB movie review dataset to create custom model by using DistilBertForSequenceClassification. from transformers import DistilBertForSequenceClassification, Trainer, TrainingArguments model = DistilBertForSequenceClassification.from_pretrained('./imdbsentdistilbertmodel')
[]
[ "TAGS\n#transformers #pytorch #distilbert #text-classification #text Classification #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
null
null
test
{}
Sakil/testmodel
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #region-us
test
[]
[ "TAGS\n#region-us \n" ]
fill-mask
transformers
# distilbert-base-nepali This model is pre-trained on [nepalitext](https://huggingface.co/datasets/Sakonii/nepalitext-language-model-dataset) dataset consisting of over 13 million Nepali text sequences using a masked language modeling (MLM) objective. Our approach trains a Sentence Piece Model (SPM) for text tokenization similar to [XLM-ROBERTa](https://arxiv.org/abs/1911.02116) and trains [distilbert model](https://arxiv.org/abs/1910.01108) for language modeling. Find more details in [this paper](https://aclanthology.org/2022.sigul-1.14/). It achieves the following results on the evaluation set: mlm probability|evaluation loss|evaluation perplexity --:|----:|-----:| 15%|2.349|10.479| 20%|2.605|13.351| ## Model description Refer to original [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) ## Intended uses & limitations This backbone model intends to be fine-tuned on Nepali language focused downstream task such as sequence classification, token classification or question answering. The language model being trained on a data with texts grouped to a block size of 512, it handles text sequence up to 512 tokens and may not perform satisfactorily on shorter sequences. ## Usage This model can be used directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='Sakonii/distilbert-base-nepali') >>> unmasker("मानविय गतिविधिले प्रातृतिक पर्यावरन प्रनालीलाई अपरिमेय क्षति पु्र्याएको छ। परिवर्तनशिल जलवायुले खाध, सुरक्षा, <mask>, जमिन, मौसमलगायतलाई असंख्य तरिकाले प्रभावित छ।") [{'score': 0.04128897562623024, 'sequence': 'मानविय गतिविधिले प्रातृतिक पर्यावरन प्रनालीलाई अपरिमेय क्षति पु्र्याएको छ। परिवर्तनशिल जलवायुले खाध, सुरक्षा, मौसम, जमिन, मौसमलगायतलाई असंख्य तरिकाले प्रभावित छ।', 'token': 2605, 'token_str': 'मौसम'}, {'score': 0.04100276157259941, 'sequence': 'मानविय गतिविधिले प्रातृतिक पर्यावरन प्रनालीलाई अपरिमेय क्षति पु्र्याएको छ। परिवर्तनशिल जलवायुले खाध, सुरक्षा, प्रकृति, जमिन, मौसमलगायतलाई असंख्य तरिकाले प्रभावित छ।', 'token': 2792, 'token_str': 'प्रकृति'}, {'score': 0.026525357738137245, 'sequence': 'मानविय गतिविधिले प्रातृतिक पर्यावरन प्रनालीलाई अपरिमेय क्षति पु्र्याएको छ। परिवर्तनशिल जलवायुले खाध, सुरक्षा, पानी, जमिन, मौसमलगायतलाई असंख्य तरिकाले प्रभावित छ।', 'token': 387, 'token_str': 'पानी'}, {'score': 0.02340106852352619, 'sequence': 'मानविय गतिविधिले प्रातृतिक पर्यावरन प्रनालीलाई अपरिमेय क्षति पु्र्याएको छ। परिवर्तनशिल जलवायुले खाध, सुरक्षा, जल, जमिन, मौसमलगायतलाई असंख्य तरिकाले प्रभावित छ।', 'token': 1313, 'token_str': 'जल'}, {'score': 0.02055591531097889, 'sequence': 'मानविय गतिविधिले प्रातृतिक पर्यावरन प्रनालीलाई अपरिमेय क्षति पु्र्याएको छ। परिवर्तनशिल जलवायुले खाध, सुरक्षा, वातावरण, जमिन, मौसमलगायतलाई असंख्य तरिकाले प्रभावित छ।', 'token': 790, 'token_str': 'वातावरण'}] ``` Here is how we can use the model to get the features of a given text in PyTorch: ```python from transformers import AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained('Sakonii/distilbert-base-nepali') model = AutoModelForMaskedLM.from_pretrained('Sakonii/distilbert-base-nepali') # prepare input text = "चाहिएको text यता राख्नु होला।" encoded_input = tokenizer(text, return_tensors='pt') # forward pass output = model(**encoded_input) ``` ## Training data This model is trained on [nepalitext](https://huggingface.co/datasets/Sakonii/nepalitext-language-model-dataset) language modeling dataset which combines the datasets: [OSCAR](https://huggingface.co/datasets/oscar) , 
[cc100](https://huggingface.co/datasets/cc100) and a set of scraped Nepali articles on Wikipedia. As for training the language model, the texts in the training set are grouped into blocks of 512 tokens. ## Tokenization A Sentence Piece Model (SPM) is trained on a subset of the [nepalitext](https://huggingface.co/datasets/Sakonii/nepalitext-language-model-dataset) dataset for text tokenization. The tokenizer is trained with vocab-size=24576, min-frequency=4, limit-alphabet=1000, and model-max-length=512. ## Training procedure The model is trained with the same configuration as the original [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased); 512 tokens per instance, 28 instances per batch, and around 35.7K training steps. ### Training hyperparameters The following hyperparameters were used for training the final epoch: [ Refer to the *Training results* table below for varying hyperparameters every epoch ] - learning_rate: 5e-05 - train_batch_size: 28 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results The model is trained for 4 epochs with varying hyperparameters: | Training Loss | Epoch | MLM Probability | Train Batch Size | Step | Validation Loss | Perplexity | |:-------------:|:-----:|:---------------:|:----------------:|:-----:|:---------------:|:----------:| | 3.4477 | 1.0 | 15 | 26 | 38864 | 3.3067 | 27.2949 | | 2.9451 | 2.0 | 15 | 28 | 35715 | 2.8238 | 16.8407 | | 2.866 | 3.0 | 20 | 28 | 35715 | 2.7431 | 15.5351 | | 2.7287 | 4.0 | 20 | 28 | 35715 | 2.6053 | 13.5353 | | 2.6412 | 5.0 | 20 | 28 | 35715 | 2.5161 | 12.3802 | Final model evaluated with MLM Probability of 15%: | Training Loss | Epoch | MLM Probability | Train Batch Size | Step | Validation Loss | Perplexity | |:-------------:|:-----:|:---------------:|:----------------:|:-----:|:---------------:|:----------:| | - | - | 15 | - | - | 2.3494 | 10.4791 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.9.1 - Datasets 1.18.3 - Tokenizers 0.10.3
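As a complement to the fill-mask examples above, here is a hedged sketch of loading this checkpoint for the sequence-classification fine-tuning mentioned under intended uses; the `num_labels` value, dataset names, and Trainer wiring are placeholders rather than part of the original card:

```python
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

model_id = "Sakonii/distilbert-base-nepali"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# num_labels=2 is an arbitrary placeholder for a hypothetical Nepali text-classification task
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)

def tokenize(batch):
    # the backbone was pre-trained on 512-token blocks, so truncate to that length
    return tokenizer(batch["text"], truncation=True, max_length=512)

# train_ds / eval_ds would be tokenized, labeled Nepali datasets:
# trainer = Trainer(model=model,
#                   args=TrainingArguments(output_dir="nepali-classifier"),
#                   train_dataset=train_ds, eval_dataset=eval_ds)
# trainer.train()
```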
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": "Sakonii/nepalitext-language-model-dataset", "mask_token": "<mask>", "widget": [{"text": "\u092e\u093e\u0928\u0935\u093f\u092f \u0917\u0924\u093f\u0935\u093f\u0927\u093f\u0932\u0947 \u092a\u094d\u0930\u093e\u0924\u0943\u0924\u093f\u0915 \u092a\u0930\u094d\u092f\u093e\u0935\u0930\u0928 \u092a\u094d\u0930\u0928\u093e\u0932\u0940\u0932\u093e\u0908 \u0905\u092a\u0930\u093f\u092e\u0947\u092f \u0915\u094d\u0937\u0924\u093f \u092a\u0941\u094d\u0930\u094d\u092f\u093e\u090f\u0915\u094b \u091b\u0964 \u092a\u0930\u093f\u0935\u0930\u094d\u0924\u0928\u0936\u093f\u0932 \u091c\u0932\u0935\u093e\u092f\u0941\u0932\u0947 \u0916\u093e\u0927, \u0938\u0941\u0930\u0915\u094d\u0937\u093e, <mask>, \u091c\u092e\u093f\u0928, \u092e\u094c\u0938\u092e\u0932\u0917\u093e\u092f\u0924\u0932\u093e\u0908 \u0905\u0938\u0902\u0916\u094d\u092f \u0924\u0930\u093f\u0915\u093e\u0932\u0947 \u092a\u094d\u0930\u092d\u093e\u0935\u093f\u0924 \u091b\u0964", "example_title": "Example 1"}, {"text": "\u0905\u091a\u0947\u0932 \u0935\u093f\u0926\u094d\u092f\u093e\u0932\u092f \u0930 \u0915\u0932\u0947\u091c\u0939\u0930\u0942\u0932\u0947 \u0938\u094d\u092e\u093e\u0930\u093f\u0915\u093e \u0915\u0924\u094d\u0924\u093f\u0915\u094b \u092a\u094d\u0930\u0915\u093e\u0936\u0928 \u0917\u0930\u094d\u091b\u0928\u094d, \u092f\u0915\u093f\u0928 \u091b\u0948\u0928\u202f\u0964 \u0915\u0947\u0939\u0940 \u0935\u0930\u094d\u0937\u092a\u0939\u093f\u0932\u0947\u0938\u092e\u094d\u092e \u0917\u093e\u0909\u0901\u0938\u0939\u0930\u0915\u093e \u0938\u093e\u0928\u093e\u0920\u0942\u0932\u093e <mask> \u0938\u0902\u0938\u094d\u0925\u093e\u0939\u0930\u0942\u092e\u093e \u092a\u0941\u0917\u094d\u0926\u093e \u0936\u093f\u0915\u094d\u0937\u0915 \u0935\u093e \u0915\u0930\u094d\u092e\u091a\u093e\u0930\u0940\u0932\u0947 \u0938\u0902\u0938\u094d\u0925\u093e\u092c\u093e\u091f \u092a\u094d\u0930\u0915\u093e\u0936\u093f\u0924 \u092a\u0924\u094d\u0930\u093f\u0915\u093e, \u0938\u094d\u092e\u093e\u0930\u093f\u0915\u093e \u0930 \u092a\u0941\u0938\u094d\u0924\u0915 \u0915\u094b\u0938\u0947\u0932\u0940\u0915\u093e \u0930\u0942\u092a\u092e\u093e \u0925\u092e\u093e\u0909\u0901\u0925\u0947\u202f\u0964", "example_title": "Example 2"}, {"text": "\u091c\u0932\u0935\u093f\u0926\u094d\u092f\u0941\u0924\u094d \u0935\u093f\u0915\u093e\u0938\u0915\u094b \u0967\u0967\u0966 \u0935\u0930\u094d\u0937\u0915\u094b \u0907\u0924\u093f\u0939\u093e\u0938 \u092c\u0928\u093e\u090f\u0915\u094b \u0928\u0947\u092a\u093e\u0932\u092e\u093e \u0939\u093e\u0932 \u0938\u0930\u0915\u093e\u0930\u0940 \u0930 \u0928\u093f\u091c\u0940 \u0915\u094d\u0937\u0947\u0924\u094d\u0930\u092c\u093e\u091f \u0917\u0930\u0940 \u0915\u0930\u093f\u092c \u0968 \u0939\u091c\u093e\u0930 \u092e\u0947\u0917\u093e\u0935\u093e\u091f <mask> \u0909\u0924\u094d\u092a\u093e\u0926\u0928 \u092d\u0907\u0930\u0939\u0947\u0915\u094b \u091b\u202f\u0964", "example_title": "Example 3"}], "model-index": [{"name": "distilbert-base-nepali", "results": []}]}
Sakonii/distilbert-base-nepali
null
[ "transformers", "pytorch", "safetensors", "distilbert", "fill-mask", "generated_from_trainer", "dataset:Sakonii/nepalitext-language-model-dataset", "arxiv:1911.02116", "arxiv:1910.01108", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1911.02116", "1910.01108" ]
[]
TAGS #transformers #pytorch #safetensors #distilbert #fill-mask #generated_from_trainer #dataset-Sakonii/nepalitext-language-model-dataset #arxiv-1911.02116 #arxiv-1910.01108 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-nepali ====================== This model is pre-trained on nepalitext dataset consisting of over 13 million Nepali text sequences using a masked language modeling (MLM) objective. Our approach trains a Sentence Piece Model (SPM) for text tokenization similar to XLM-ROBERTa and trains distilbert model for language modeling. Find more details in this paper. It achieves the following results on the evaluation set: Model description ----------------- Refer to original distilbert-base-uncased Intended uses & limitations --------------------------- This backbone model intends to be fine-tuned on Nepali language focused downstream task such as sequence classification, token classification or question answering. The language model being trained on a data with texts grouped to a block size of 512, it handles text sequence up to 512 tokens and may not perform satisfactorily on shorter sequences. Usage ----- This model can be used directly with a pipeline for masked language modeling: Here is how we can use the model to get the features of a given text in PyTorch: Training data ------------- This model is trained on nepalitext language modeling dataset which combines the datasets: OSCAR , cc100 and a set of scraped Nepali articles on Wikipedia. As for training the language model, the texts in the training set are grouped to a block of 512 tokens. Tokenization ------------ A Sentence Piece Model (SPM) is trained on a subset of nepalitext dataset for text tokenization. The tokenizer trained with vocab-size=24576, min-frequency=4, limit-alphabet=1000 and model-max-length=512. Training procedure ------------------ The model is trained with the same configuration as the original distilbert-base-uncased; 512 tokens per instance, 28 instances per batch, and around 35.7K training steps. ### Training hyperparameters The following hyperparameters were used for training of the final epoch: [ Refer to the *Training results* table below for varying hyperparameters every epoch ] * learning\_rate: 5e-05 * train\_batch\_size: 28 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 1 * mixed\_precision\_training: Native AMP ### Training results The model is trained for 4 epochs with varying hyperparameters: Final model evaluated with MLM Probability of 15%: ### Framework versions * Transformers 4.16.2 * Pytorch 1.9.1 * Datasets 1.18.3 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used for training of the final epoch: [ Refer to the *Training results* table below for varying hyperparameters every epoch ]\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 28\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP", "### Training results\n\n\nThe model is trained for 4 epochs with varying hyperparameters:\n\n\n\nFinal model evaluated with MLM Probability of 15%:", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.9.1\n* Datasets 1.18.3\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #safetensors #distilbert #fill-mask #generated_from_trainer #dataset-Sakonii/nepalitext-language-model-dataset #arxiv-1911.02116 #arxiv-1910.01108 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used for training of the final epoch: [ Refer to the *Training results* table below for varying hyperparameters every epoch ]\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 28\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP", "### Training results\n\n\nThe model is trained for 4 epochs with varying hyperparameters:\n\n\n\nFinal model evaluated with MLM Probability of 15%:", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.9.1\n* Datasets 1.18.3\n* Tokenizers 0.10.3" ]
text2text-generation
transformers
# CodeT5-base for Code Summarization [CodeT5-base](https://huggingface.co/Salesforce/codet5-base) model fine-tuned on CodeSearchNet data in a multi-lingual training setting ( Ruby/JavaScript/Go/Python/Java/PHP) for code summarization. It was introduced in this EMNLP 2021 paper [CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation](https://arxiv.org/abs/2109.00859) by Yue Wang, Weishi Wang, Shafiq Joty, Steven C.H. Hoi. Please check out more at [this repository](https://github.com/salesforce/CodeT5). ## How to use Here is how to use this model: ```python from transformers import RobertaTokenizer, T5ForConditionalGeneration if __name__ == '__main__': tokenizer = RobertaTokenizer.from_pretrained('Salesforce/codet5-base-multi-sum') model = T5ForConditionalGeneration.from_pretrained('Salesforce/codet5-base-multi-sum') text = """def svg_to_image(string, size=None): if isinstance(string, unicode): string = string.encode('utf-8') renderer = QtSvg.QSvgRenderer(QtCore.QByteArray(string)) if not renderer.isValid(): raise ValueError('Invalid SVG data.') if size is None: size = renderer.defaultSize() image = QtGui.QImage(size, QtGui.QImage.Format_ARGB32) painter = QtGui.QPainter(image) renderer.render(painter) return image""" input_ids = tokenizer(text, return_tensors="pt").input_ids generated_ids = model.generate(input_ids, max_length=20) print(tokenizer.decode(generated_ids[0], skip_special_tokens=True)) # this prints: "Convert a SVG string to a QImage." ``` ## Fine-tuning data We employ the filtered version of CodeSearchNet data [[Husain et al., 2019](https://arxiv.org/abs/1909.09436)] from [CodeXGLUE](https://github.com/microsoft/CodeXGLUE/tree/main/Code-Text/code-to-text) benchmark for fine-tuning on code summarization. The data is tokenized with our pre-trained code-specific BPE (Byte-Pair Encoding) tokenizer. One can prepare text (or code) for the model using RobertaTokenizer with the vocab files from [codet5-base](https://huggingface.co/Salesforce/codet5-base). ### Data statistic | Programming Language | Training | Dev | Test | | :------------------- | :------: | :----: | :----: | | Python | 251,820 | 13,914 | 14,918 | | PHP | 241,241 | 12,982 | 14,014 | | Go | 167,288 | 7,325 | 8,122 | | Java | 164,923 | 5,183 | 10,955 | | JavaScript | 58,025 | 3,885 | 3,291 | | Ruby | 24,927 | 1,400 | 1,261 | ## Training procedure We fine-tune codet5-base on these six programming languages (Ruby/JavaScript/Go/Python/Java/PHP) in the multi-task learning setting. We employ the balanced sampling to avoid biasing towards high-resource tasks. Please refer to the [paper](https://arxiv.org/abs/2109.00859) for more details. ## Evaluation results Unlike the paper allowing to select different best checkpoints for different programming languages (PLs), here we employ one checkpoint for all PLs. Besides, we remove the task control prefix to specify the PL in training and inference. 
The results on the test set are shown as below: | Model | Ruby | Javascript | Go | Python | Java | PHP | Overall | | ----------- | :-------: | :--------: | :-------: | :-------: | :-------: | :-------: | :-------: | | Seq2Seq | 9.64 | 10.21 | 13.98 | 15.93 | 15.09 | 21.08 | 14.32 | | Transformer | 11.18 | 11.59 | 16.38 | 15.81 | 16.26 | 22.12 | 15.56 | | [RoBERTa](https://arxiv.org/pdf/1907.11692.pdf) | 11.17 | 11.90 | 17.72 | 18.14 | 16.47 | 24.02 | 16.57 | | [CodeBERT](https://arxiv.org/pdf/2002.08155.pdf) | 12.16 | 14.90 | 18.07 | 19.06 | 17.65 | 25.16 | 17.83 | | [PLBART](https://aclanthology.org/2021.naacl-main.211.pdf) | 14.11 |15.56 | 18.91 | 19.30 | 18.45 | 23.58 | 18.32 | | [CodeT5-small](https://arxiv.org/abs/2109.00859) |14.87 | 15.32 | 19.25 | 20.04 | 19.92 | 25.46 | 19.14 | | [CodeT5-base](https://arxiv.org/abs/2109.00859) | **15.24** | 16.16 | 19.56 | 20.01 | **20.31** | 26.03 | 19.55 | | [CodeT5-base-multi-sum](https://arxiv.org/abs/2109.00859) | **15.24** | **16.18** | **19.95** | **20.42** | 20.26 | **26.10** | **19.69** | ## Citation ```bibtex @inproceedings{ wang2021codet5, title={CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation}, author={Yue Wang, Weishi Wang, Shafiq Joty, Steven C.H. Hoi}, booktitle={Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021}, year={2021}, } ```
{"license": "bsd-3-clause", "tags": ["codet5"], "datasets": ["code_search_net"], "inference": true}
Salesforce/codet5-base-multi-sum
null
[ "transformers", "pytorch", "t5", "text2text-generation", "codet5", "dataset:code_search_net", "arxiv:2109.00859", "arxiv:1909.09436", "arxiv:1907.11692", "arxiv:2002.08155", "license:bsd-3-clause", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2109.00859", "1909.09436", "1907.11692", "2002.08155" ]
[]
TAGS #transformers #pytorch #t5 #text2text-generation #codet5 #dataset-code_search_net #arxiv-2109.00859 #arxiv-1909.09436 #arxiv-1907.11692 #arxiv-2002.08155 #license-bsd-3-clause #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
CodeT5-base for Code Summarization ================================== CodeT5-base model fine-tuned on CodeSearchNet data in a multi-lingual training setting ( Ruby/JavaScript/Go/Python/Java/PHP) for code summarization. It was introduced in this EMNLP 2021 paper CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation by Yue Wang, Weishi Wang, Shafiq Joty, Steven C.H. Hoi. Please check out more at this repository. How to use ---------- Here is how to use this model: Fine-tuning data ---------------- We employ the filtered version of CodeSearchNet data [Husain et al., 2019] from CodeXGLUE benchmark for fine-tuning on code summarization. The data is tokenized with our pre-trained code-specific BPE (Byte-Pair Encoding) tokenizer. One can prepare text (or code) for the model using RobertaTokenizer with the vocab files from codet5-base. ### Data statistic Training procedure ------------------ We fine-tune codet5-base on these six programming languages (Ruby/JavaScript/Go/Python/Java/PHP) in the multi-task learning setting. We employ the balanced sampling to avoid biasing towards high-resource tasks. Please refer to the paper for more details. Evaluation results ------------------ Unlike the paper allowing to select different best checkpoints for different programming languages (PLs), here we employ one checkpoint for all PLs. Besides, we remove the task control prefix to specify the PL in training and inference. The results on the test set are shown as below:
[ "### Data statistic\n\n\n\nTraining procedure\n------------------\n\n\nWe fine-tune codet5-base on these six programming languages (Ruby/JavaScript/Go/Python/Java/PHP) in the multi-task learning setting. We employ the\nbalanced sampling to avoid biasing towards high-resource tasks. Please refer to the paper for more details.\n\n\nEvaluation results\n------------------\n\n\nUnlike the paper allowing to select different best checkpoints for different programming languages (PLs), here we employ one checkpoint for\nall PLs. Besides, we remove the task control prefix to specify the PL in training and inference. The results on the test set are shown as below:" ]
[ "TAGS\n#transformers #pytorch #t5 #text2text-generation #codet5 #dataset-code_search_net #arxiv-2109.00859 #arxiv-1909.09436 #arxiv-1907.11692 #arxiv-2002.08155 #license-bsd-3-clause #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n", "### Data statistic\n\n\n\nTraining procedure\n------------------\n\n\nWe fine-tune codet5-base on these six programming languages (Ruby/JavaScript/Go/Python/Java/PHP) in the multi-task learning setting. We employ the\nbalanced sampling to avoid biasing towards high-resource tasks. Please refer to the paper for more details.\n\n\nEvaluation results\n------------------\n\n\nUnlike the paper allowing to select different best checkpoints for different programming languages (PLs), here we employ one checkpoint for\nall PLs. Besides, we remove the task control prefix to specify the PL in training and inference. The results on the test set are shown as below:" ]
text2text-generation
transformers
# CodeT5 (base-sized model) Pre-trained CodeT5 model. It was introduced in the paper [CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation](https://arxiv.org/abs/2109.00859) by Yue Wang, Weishi Wang, Shafiq Joty, Steven C.H. Hoi and first released in [this repository](https://github.com/salesforce/CodeT5). Disclaimer: The team releasing CodeT5 did not write a model card for this model so this model card has been written by the Hugging Face team (more specifically, [nielsr](https://huggingface.co/nielsr)). ## Model description From the abstract: "We present CodeT5, a unified pre-trained encoder-decoder Transformer model that better leverages the code semantics conveyed from the developer-assigned identifiers. Our model employs a unified framework to seamlessly support both code understanding and generation tasks and allows for multi-task learning. Besides, we propose a novel identifier-aware pre-training task that enables the model to distinguish which code tokens are identifiers and to recover them when they are masked. Furthermore, we propose to exploit the user-written code comments with a bimodal dual generation task for better NL-PL alignment. Comprehensive experiments show that CodeT5 significantly outperforms prior methods on understanding tasks such as code defect detection and clone detection, and generation tasks across various directions including PL-NL, NL-PL, and PL-PL. Further analysis reveals that our model can better capture semantic information from code." ## Intended uses & limitations This repository contains the pre-trained model only, so you can use this model for (among other tasks) masked span prediction, as shown in the code example below. However, the main use of this model is to fine-tune it for a downstream task of interest, such as: * code summarization * code generation * code translation * code refinement * code defect detection * code clone detection. Supervised datasets for code can be found [here](https://huggingface.co/datasets?languages=languages:code). See the [model hub](https://huggingface.co/models?search=salesforce/codet) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model: ```python from transformers import RobertaTokenizer, T5ForConditionalGeneration tokenizer = RobertaTokenizer.from_pretrained('Salesforce/codet5-base') model = T5ForConditionalGeneration.from_pretrained('Salesforce/codet5-base') text = "def greet(user): print(f'hello <extra_id_0>!')" input_ids = tokenizer(text, return_tensors="pt").input_ids # simply generate a single sequence generated_ids = model.generate(input_ids, max_length=8) print(tokenizer.decode(generated_ids[0], skip_special_tokens=True)) # this prints "{user.username}" ``` ## Training data The CodeT5 model was pretrained on CodeSearchNet [Husain et al., 2019](https://arxiv.org/abs/1909.09436). Additionally, the authors collected two datasets of C/CSharp from [BigQuery1](https://console.cloud.google.com/marketplace/details/github/github-repos) to ensure that all downstream tasks have overlapped programming languages with the pre-training data. In total, around 8.35 million instances are used for pretraining. ## Training procedure ### Preprocessing This model uses a code-specific BPE (Byte-Pair Encoding) tokenizer trained using the [HuggingFace Tokenizers](https://github.com/huggingface/tokenizers) library. One can prepare text (or code) for the model using RobertaTokenizer, with the files from this repository. 
## Evaluation results For evaluation results on several downstream benchmarks, we refer to the paper. ### BibTeX entry and citation info ```bibtex @misc{wang2021codet5, title={CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation}, author={Yue Wang and Weishi Wang and Shafiq Joty and Steven C. H. Hoi}, year={2021}, eprint={2109.00859}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
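To make the fine-tuning uses listed above more concrete, here is a hedged sketch of computing the sequence-to-sequence training loss for code summarization with this checkpoint; the toy (code, summary) pair and the missing training loop are illustrative assumptions, not part of the original card:

```python
from transformers import RobertaTokenizer, T5ForConditionalGeneration

tokenizer = RobertaTokenizer.from_pretrained("Salesforce/codet5-base")
model = T5ForConditionalGeneration.from_pretrained("Salesforce/codet5-base")

# toy supervised pair standing in for a real code-summarization dataset
code = "def add(a, b):\n    return a + b"
summary = "Add two numbers."

inputs = tokenizer(code, return_tensors="pt", truncation=True, max_length=512)
labels = tokenizer(summary, return_tensors="pt", truncation=True, max_length=64).input_ids

outputs = model(**inputs, labels=labels)
print(outputs.loss)  # seq2seq cross-entropy; a training loop would backpropagate this
```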
{"license": "apache-2.0", "tags": ["codet5"], "datasets": ["code_search_net"], "inference": false}
Salesforce/codet5-base
null
[ "transformers", "pytorch", "t5", "text2text-generation", "codet5", "dataset:code_search_net", "arxiv:2109.00859", "arxiv:1909.09436", "license:apache-2.0", "autotrain_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2109.00859", "1909.09436" ]
[]
TAGS #transformers #pytorch #t5 #text2text-generation #codet5 #dataset-code_search_net #arxiv-2109.00859 #arxiv-1909.09436 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us
# CodeT5 (base-sized model) Pre-trained CodeT5 model. It was introduced in the paper CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation by Yue Wang, Weishi Wang, Shafiq Joty, Steven C.H. Hoi and first released in this repository. Disclaimer: The team releasing CodeT5 did not write a model card for this model so this model card has been written by the Hugging Face team (more specifically, nielsr). ## Model description From the abstract: "We present CodeT5, a unified pre-trained encoder-decoder Transformer model that better leverages the code semantics conveyed from the developer-assigned identifiers. Our model employs a unified framework to seamlessly support both code understanding and generation tasks and allows for multi-task learning. Besides, we propose a novel identifier-aware pre-training task that enables the model to distinguish which code tokens are identifiers and to recover them when they are masked. Furthermore, we propose to exploit the user-written code comments with a bimodal dual generation task for better NL-PL alignment. Comprehensive experiments show that CodeT5 significantly outperforms prior methods on understanding tasks such as code defect detection and clone detection, and generation tasks across various directions including PL-NL, NL-PL, and PL-PL. Further analysis reveals that our model can better capture semantic information from code." ## Intended uses & limitations This repository contains the pre-trained model only, so you can use this model for (among other tasks) masked span prediction, as shown in the code example below. However, the main use of this model is to fine-tune it for a downstream task of interest, such as: * code summarization * code generation * code translation * code refinement * code defect detection * code clone detection. Supervised datasets for code can be found here. See the model hub to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model: ## Training data The CodeT5 model was pretrained on CodeSearchNet Husain et al., 2019. Additionally, the authors collected two datasets of C/CSharp from BigQuery1 to ensure that all downstream tasks have overlapped programming languages with the pre-training data. In total, around 8.35 million instances are used for pretraining. ## Training procedure ### Preprocessing This model uses a code-specific BPE (Byte-Pair Encoding) tokenizer trained using the HuggingFace Tokenizers library. One can prepare text (or code) for the model using RobertaTokenizer, with the files from this repository. ## Evaluation results For evaluation results on several downstream benchmarks, we refer to the paper. ### BibTeX entry and citation info
[ "# CodeT5 (base-sized model) \n\nPre-trained CodeT5 model. It was introduced in the paper CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models\nfor Code Understanding and Generation by Yue Wang, Weishi Wang, Shafiq Joty, Steven C.H. Hoi and first released in this repository. \n\nDisclaimer: The team releasing CodeT5 did not write a model card for this model so this model card has been written by the Hugging Face team (more specifically, nielsr).", "## Model description\n\nFrom the abstract:\n\n\"We present CodeT5, a unified pre-trained encoder-decoder Transformer model that better leverages the code semantics conveyed from the developer-assigned identifiers. Our model employs a unified framework to seamlessly support both code understanding and generation tasks and allows for multi-task learning. Besides, we propose a novel identifier-aware pre-training task that enables the model to distinguish which code tokens are identifiers and to recover them when they are masked. Furthermore, we propose to exploit the user-written code comments with a bimodal dual generation task for better NL-PL alignment. Comprehensive experiments show that CodeT5 significantly outperforms prior methods on understanding tasks such as code defect detection and clone detection, and generation tasks across various directions including PL-NL, NL-PL, and PL-PL. Further analysis reveals that our model can better capture semantic information from code.\"", "## Intended uses & limitations\n\nThis repository contains the pre-trained model only, so you can use this model for (among other tasks) masked span prediction, as shown in the code example below. However, the main use of this model is to fine-tune it for a downstream task of interest, such as:\n* code summarization\n* code generation\n* code translation\n* code refinement\n* code defect detection\n* code clone detection. \n\nSupervised datasets for code can be found here.\nSee the model hub to look for fine-tuned versions on a task that interests you.", "### How to use\n\nHere is how to use this model:", "## Training data\n\nThe CodeT5 model was pretrained on CodeSearchNet Husain et al., 2019. Additionally, the authors collected two datasets of C/CSharp from BigQuery1 to ensure that all downstream tasks have overlapped programming languages with the pre-training data. In total, around 8.35 million instances are used for pretraining.", "## Training procedure", "### Preprocessing\n\nThis model uses a code-specific BPE (Byte-Pair Encoding) tokenizer trained using the HuggingFace Tokenizers library. One can prepare text (or code) for the model using RobertaTokenizer, with the files from this repository.", "## Evaluation results\n\nFor evaluation results on several downstream benchmarks, we refer to the paper.", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #t5 #text2text-generation #codet5 #dataset-code_search_net #arxiv-2109.00859 #arxiv-1909.09436 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us \n", "# CodeT5 (base-sized model) \n\nPre-trained CodeT5 model. It was introduced in the paper CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models\nfor Code Understanding and Generation by Yue Wang, Weishi Wang, Shafiq Joty, Steven C.H. Hoi and first released in this repository. \n\nDisclaimer: The team releasing CodeT5 did not write a model card for this model so this model card has been written by the Hugging Face team (more specifically, nielsr).", "## Model description\n\nFrom the abstract:\n\n\"We present CodeT5, a unified pre-trained encoder-decoder Transformer model that better leverages the code semantics conveyed from the developer-assigned identifiers. Our model employs a unified framework to seamlessly support both code understanding and generation tasks and allows for multi-task learning. Besides, we propose a novel identifier-aware pre-training task that enables the model to distinguish which code tokens are identifiers and to recover them when they are masked. Furthermore, we propose to exploit the user-written code comments with a bimodal dual generation task for better NL-PL alignment. Comprehensive experiments show that CodeT5 significantly outperforms prior methods on understanding tasks such as code defect detection and clone detection, and generation tasks across various directions including PL-NL, NL-PL, and PL-PL. Further analysis reveals that our model can better capture semantic information from code.\"", "## Intended uses & limitations\n\nThis repository contains the pre-trained model only, so you can use this model for (among other tasks) masked span prediction, as shown in the code example below. However, the main use of this model is to fine-tune it for a downstream task of interest, such as:\n* code summarization\n* code generation\n* code translation\n* code refinement\n* code defect detection\n* code clone detection. \n\nSupervised datasets for code can be found here.\nSee the model hub to look for fine-tuned versions on a task that interests you.", "### How to use\n\nHere is how to use this model:", "## Training data\n\nThe CodeT5 model was pretrained on CodeSearchNet Husain et al., 2019. Additionally, the authors collected two datasets of C/CSharp from BigQuery1 to ensure that all downstream tasks have overlapped programming languages with the pre-training data. In total, around 8.35 million instances are used for pretraining.", "## Training procedure", "### Preprocessing\n\nThis model uses a code-specific BPE (Byte-Pair Encoding) tokenizer trained using the HuggingFace Tokenizers library. One can prepare text (or code) for the model using RobertaTokenizer, with the files from this repository.", "## Evaluation results\n\nFor evaluation results on several downstream benchmarks, we refer to the paper.", "### BibTeX entry and citation info" ]
text2text-generation
transformers
# CodeT5 (small-sized model) Pre-trained CodeT5 model. It was introduced in the paper [CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation](https://arxiv.org/abs/2109.00859) by Yue Wang, Weishi Wang, Shafiq Joty, Steven C.H. Hoi and first released in [this repository](https://github.com/salesforce/CodeT5). Disclaimer: The team releasing CodeT5 did not write a model card for this model so this model card has been written by the Hugging Face team (more specifically, [nielsr](https://huggingface.co/nielsr)). ## Model description From the abstract: "We present CodeT5, a unified pre-trained encoder-decoder Transformer model that better leverages the code semantics conveyed from the developer-assigned identifiers. Our model employs a unified framework to seamlessly support both code understanding and generation tasks and allows for multi-task learning. Besides, we propose a novel identifier-aware pre-training task that enables the model to distinguish which code tokens are identifiers and to recover them when they are masked. Furthermore, we propose to exploit the user-written code comments with a bimodal dual generation task for better NL-PL alignment. Comprehensive experiments show that CodeT5 significantly outperforms prior methods on understanding tasks such as code defect detection and clone detection, and generation tasks across various directions including PL-NL, NL-PL, and PL-PL. Further analysis reveals that our model can better capture semantic information from code." ## Intended uses & limitations This repository contains the pre-trained model only, so you can use this model for masked span prediction, as shown in the code example below. However, the main use of this model is to fine-tune it for a downstream task of interest, such as: * code summarization * code generation * code translation * code refinement * code defect detection * code clone detection. See the [model hub](https://huggingface.co/models?search=salesforce/codet) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model: ```python from transformers import RobertaTokenizer, T5ForConditionalGeneration tokenizer = RobertaTokenizer.from_pretrained('Salesforce/codet5-small') model = T5ForConditionalGeneration.from_pretrained('Salesforce/codet5-small') text = "def greet(user): print(f'hello <extra_id_0>!')" input_ids = tokenizer(text, return_tensors="pt").input_ids # simply generate a single sequence generated_ids = model.generate(input_ids, max_length=10) print(tokenizer.decode(generated_ids[0], skip_special_tokens=True)) # this prints "user: {user.name}" ``` ## Training data The CodeT5 model was pretrained on CodeSearchNet [Husain et al., 2019](https://arxiv.org/abs/1909.09436). Additionally, the authors collected two datasets of C/CSharp from [BigQuery1](https://console.cloud.google.com/marketplace/details/github/github-repos) to ensure that all downstream tasks have overlapped programming languages with the pre-training data. In total, around 8.35 million instances are used for pretraining. ## Training procedure ### Preprocessing This model uses a code-specific BPE (Byte-Pair Encoding) tokenizer. One can prepare text (or code) for the model using RobertaTokenizer, with the files from this repository. ## Evaluation results For evaluation results on several downstream benchmarks, we refer to the paper. 
### BibTeX entry and citation info ```bibtex @misc{wang2021codet5, title={CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation}, author={Yue Wang and Weishi Wang and Shafiq Joty and Steven C. H. Hoi}, year={2021}, eprint={2109.00859}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
{"license": "apache-2.0", "tags": ["codet5"], "datasets": ["code_search_net"], "inference": false}
Salesforce/codet5-small
null
[ "transformers", "pytorch", "t5", "text2text-generation", "codet5", "dataset:code_search_net", "arxiv:2109.00859", "arxiv:1909.09436", "license:apache-2.0", "autotrain_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2109.00859", "1909.09436" ]
[]
TAGS #transformers #pytorch #t5 #text2text-generation #codet5 #dataset-code_search_net #arxiv-2109.00859 #arxiv-1909.09436 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us
# CodeT5 (small-sized model) Pre-trained CodeT5 model. It was introduced in the paper CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation by Yue Wang, Weishi Wang, Shafiq Joty, Steven C.H. Hoi and first released in this repository. Disclaimer: The team releasing CodeT5 did not write a model card for this model so this model card has been written by the Hugging Face team (more specifically, nielsr). ## Model description From the abstract: "We present CodeT5, a unified pre-trained encoder-decoder Transformer model that better leverages the code semantics conveyed from the developer-assigned identifiers. Our model employs a unified framework to seamlessly support both code understanding and generation tasks and allows for multi-task learning. Besides, we propose a novel identifier-aware pre-training task that enables the model to distinguish which code tokens are identifiers and to recover them when they are masked. Furthermore, we propose to exploit the user-written code comments with a bimodal dual generation task for better NL-PL alignment. Comprehensive experiments show that CodeT5 significantly outperforms prior methods on understanding tasks such as code defect detection and clone detection, and generation tasks across various directions including PL-NL, NL-PL, and PL-PL. Further analysis reveals that our model can better capture semantic information from code." ## Intended uses & limitations This repository contains the pre-trained model only, so you can use this model for masked span prediction, as shown in the code example below. However, the main use of this model is to fine-tune it for a downstream task of interest, such as: * code summarization * code generation * code translation * code refinement * code defect detection * code clone detection. See the model hub to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model: ## Training data The CodeT5 model was pretrained on CodeSearchNet Husain et al., 2019. Additionally, the authors collected two datasets of C/CSharp from BigQuery1 to ensure that all downstream tasks have overlapped programming languages with the pre-training data. In total, around 8.35 million instances are used for pretraining. ## Training procedure ### Preprocessing This model uses a code-specific BPE (Byte-Pair Encoding) tokenizer. One can prepare text (or code) for the model using RobertaTokenizer, with the files from this repository. ## Evaluation results For evaluation results on several downstream benchmarks, we refer to the paper. ### BibTeX entry and citation info
[ "# CodeT5 (small-sized model) \n\nPre-trained CodeT5 model. It was introduced in the paper CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models\nfor Code Understanding and Generation by Yue Wang, Weishi Wang, Shafiq Joty, Steven C.H. Hoi and first released in this repository. \n\nDisclaimer: The team releasing CodeT5 did not write a model card for this model so this model card has been written by the Hugging Face team (more specifically, nielsr).", "## Model description\n\nFrom the abstract:\n\n\"We present CodeT5, a unified pre-trained encoder-decoder Transformer model that better leverages the code semantics conveyed from the developer-assigned identifiers. Our model employs a unified framework to seamlessly support both code understanding and generation tasks and allows for multi-task learning. Besides, we propose a novel identifier-aware pre-training task that enables the model to distinguish which code tokens are identifiers and to recover them when they are masked. Furthermore, we propose to exploit the user-written code comments with a bimodal dual generation task for better NL-PL alignment. Comprehensive experiments show that CodeT5 significantly outperforms prior methods on understanding tasks such as code defect detection and clone detection, and generation tasks across various directions including PL-NL, NL-PL, and PL-PL. Further analysis reveals that our model can better capture semantic information from code.\"", "## Intended uses & limitations\n\nThis repository contains the pre-trained model only, so you can use this model for masked span prediction, as shown in the code example below. However, the main use of this model is to fine-tune it for a downstream task of interest, such as:\n* code summarization\n* code generation\n* code translation\n* code refinement\n* code defect detection\n* code clone detection. \n\nSee the model hub to look for fine-tuned versions on a task that interests you.", "### How to use\n\nHere is how to use this model:", "## Training data\n\nThe CodeT5 model was pretrained on CodeSearchNet Husain et al., 2019. Additionally, the authors collected two datasets of C/CSharp from BigQuery1 to ensure that all downstream tasks have overlapped programming languages with the pre-training data. In total, around 8.35 million instances are used for pretraining.", "## Training procedure", "### Preprocessing\n\nThis model uses a code-specific BPE (Byte-Pair Encoding) tokenizer. One can prepare text (or code) for the model using RobertaTokenizer, with the files from this repository.", "## Evaluation results\n\nFor evaluation results on several downstream benchmarks, we refer to the paper.", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #t5 #text2text-generation #codet5 #dataset-code_search_net #arxiv-2109.00859 #arxiv-1909.09436 #license-apache-2.0 #autotrain_compatible #has_space #text-generation-inference #region-us \n", "# CodeT5 (small-sized model) \n\nPre-trained CodeT5 model. It was introduced in the paper CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models\nfor Code Understanding and Generation by Yue Wang, Weishi Wang, Shafiq Joty, Steven C.H. Hoi and first released in this repository. \n\nDisclaimer: The team releasing CodeT5 did not write a model card for this model so this model card has been written by the Hugging Face team (more specifically, nielsr).", "## Model description\n\nFrom the abstract:\n\n\"We present CodeT5, a unified pre-trained encoder-decoder Transformer model that better leverages the code semantics conveyed from the developer-assigned identifiers. Our model employs a unified framework to seamlessly support both code understanding and generation tasks and allows for multi-task learning. Besides, we propose a novel identifier-aware pre-training task that enables the model to distinguish which code tokens are identifiers and to recover them when they are masked. Furthermore, we propose to exploit the user-written code comments with a bimodal dual generation task for better NL-PL alignment. Comprehensive experiments show that CodeT5 significantly outperforms prior methods on understanding tasks such as code defect detection and clone detection, and generation tasks across various directions including PL-NL, NL-PL, and PL-PL. Further analysis reveals that our model can better capture semantic information from code.\"", "## Intended uses & limitations\n\nThis repository contains the pre-trained model only, so you can use this model for masked span prediction, as shown in the code example below. However, the main use of this model is to fine-tune it for a downstream task of interest, such as:\n* code summarization\n* code generation\n* code translation\n* code refinement\n* code defect detection\n* code clone detection. \n\nSee the model hub to look for fine-tuned versions on a task that interests you.", "### How to use\n\nHere is how to use this model:", "## Training data\n\nThe CodeT5 model was pretrained on CodeSearchNet Husain et al., 2019. Additionally, the authors collected two datasets of C/CSharp from BigQuery1 to ensure that all downstream tasks have overlapped programming languages with the pre-training data. In total, around 8.35 million instances are used for pretraining.", "## Training procedure", "### Preprocessing\n\nThis model uses a code-specific BPE (Byte-Pair Encoding) tokenizer. One can prepare text (or code) for the model using RobertaTokenizer, with the files from this repository.", "## Evaluation results\n\nFor evaluation results on several downstream benchmarks, we refer to the paper.", "### BibTeX entry and citation info" ]
text2text-generation
transformers
# MixQG (3b-sized model) MixQG is a new question generation model pre-trained on a collection of QA datasets with a mix of answer types. It was introduced in the paper [MixQG: Neural Question Generation with Mixed Answer Types](https://arxiv.org/abs/2110.08175) and the associated code is released in [this](https://github.com/salesforce/QGen) repository. ### How to use Using Huggingface pipeline abstraction: ``` from transformers import pipeline nlp = pipeline("text2text-generation", model='Salesforce/mixqg-3b', tokenizer='Salesforce/mixqg-3b') CONTEXT = "In the late 17th century, Robert Boyle proved that air is necessary for combustion." ANSWER = "Robert Boyle" def format_inputs(context: str, answer: str): return f"{answer} \\n {context}" text = format_inputs(CONTEXT, ANSWER) nlp(text) # should output [{'generated_text': 'Who proved that air is necessary for combustion?'}] ``` Using the pre-trained model directly: ``` from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained('Salesforce/mixqg-3b') model = AutoModelForSeq2SeqLM.from_pretrained('Salesforce/mixqg-3b') CONTEXT = "In the late 17th century, Robert Boyle proved that air is necessary for combustion." ANSWER = "Robert Boyle" def format_inputs(context: str, answer: str): return f"{answer} \\n {context}" text = format_inputs(CONTEXT, ANSWER) input_ids = tokenizer(text, return_tensors="pt").input_ids generated_ids = model.generate(input_ids, max_length=32, num_beams=4) output = tokenizer.batch_decode(generated_ids, skip_special_tokens=True) print(output) # should output "Who proved that air is necessary for combustion?" ``` ### Citation ``` @misc{murakhovska2021mixqg, title={MixQG: Neural Question Generation with Mixed Answer Types}, author={Lidiya Murakhovs'ka and Chien-Sheng Wu and Tong Niu and Wenhao Liu and Caiming Xiong}, year={2021}, eprint={2110.08175}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
{"language": "en", "widget": [{"text": "Robert Boyle \\\\n In the late 17th century, Robert Boyle proved that air is necessary for combustion."}]}
Salesforce/mixqg-3b
null
[ "transformers", "pytorch", "t5", "text2text-generation", "en", "arxiv:2110.08175", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2110.08175" ]
[ "en" ]
TAGS #transformers #pytorch #t5 #text2text-generation #en #arxiv-2110.08175 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
# MixQG (3b-sized model) MixQG is a new question generation model pre-trained on a collection of QA datasets with a mix of answer types. It was introduced in the paper MixQG: Neural Question Generation with Mixed Answer Types and the associated code is released in this repository. ### How to use Using Huggingface pipeline abstraction: Using the pre-trained model directly:
[ "# MixQG (3b-sized model)\nMixQG is a new question generation model pre-trained on a collection of QA datasets with a mix of answer types. It was introduced in the paper MixQG: Neural Question Generation with Mixed Answer Types and the associated code is released in this repository.", "### How to use\nUsing Huggingface pipeline abstraction:\n\nUsing the pre-trained model directly:" ]
[ "TAGS\n#transformers #pytorch #t5 #text2text-generation #en #arxiv-2110.08175 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n", "# MixQG (3b-sized model)\nMixQG is a new question generation model pre-trained on a collection of QA datasets with a mix of answer types. It was introduced in the paper MixQG: Neural Question Generation with Mixed Answer Types and the associated code is released in this repository.", "### How to use\nUsing Huggingface pipeline abstraction:\n\nUsing the pre-trained model directly:" ]
text2text-generation
transformers
# MixQG (base-sized model) MixQG is a new question generation model pre-trained on a collection of QA datasets with a mix of answer types. It was introduced in the paper [MixQG: Neural Question Generation with Mixed Answer Types](https://arxiv.org/abs/2110.08175) and the associated code is released in [this](https://github.com/salesforce/QGen) repository. ### How to use Using Huggingface pipeline abstraction: ``` from transformers import pipeline nlp = pipeline("text2text-generation", model='Salesforce/mixqg-base', tokenizer='Salesforce/mixqg-base') CONTEXT = "In the late 17th century, Robert Boyle proved that air is necessary for combustion." ANSWER = "Robert Boyle" def format_inputs(context: str, answer: str): return f"{answer} \\n {context}" text = format_inputs(CONTEXT, ANSWER) nlp(text) # should output [{'generated_text': 'Who proved that air is necessary for combustion?'}] ``` Using the pre-trained model directly: ``` from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained('Salesforce/mixqg-base') model = AutoModelForSeq2SeqLM.from_pretrained('Salesforce/mixqg-base') CONTEXT = "In the late 17th century, Robert Boyle proved that air is necessary for combustion." ANSWER = "Robert Boyle" def format_inputs(context: str, answer: str): return f"{answer} \\n {context}" text = format_inputs(CONTEXT, ANSWER) input_ids = tokenizer(text, return_tensors="pt").input_ids generated_ids = model.generate(input_ids, max_length=32, num_beams=4) output = tokenizer.batch_decode(generated_ids, skip_special_tokens=True) print(output) # should output "Who proved that air is necessary for combustion?" ``` ### Citation ``` @misc{murakhovska2021mixqg, title={MixQG: Neural Question Generation with Mixed Answer Types}, author={Lidiya Murakhovs'ka and Chien-Sheng Wu and Tong Niu and Wenhao Liu and Caiming Xiong}, year={2021}, eprint={2110.08175}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
{"language": "en", "widget": [{"text": "Robert Boyle \\\\n In the late 17th century, Robert Boyle proved that air is necessary for combustion."}]}
Salesforce/mixqg-base
null
[ "transformers", "pytorch", "t5", "text2text-generation", "en", "arxiv:2110.08175", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2110.08175" ]
[ "en" ]
TAGS #transformers #pytorch #t5 #text2text-generation #en #arxiv-2110.08175 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
# MixQG (base-sized model) MixQG is a new question generation model pre-trained on a collection of QA datasets with a mix of answer types. It was introduced in the paper MixQG: Neural Question Generation with Mixed Answer Types and the associated code is released in this repository. ### How to use Using Huggingface pipeline abstraction: Using the pre-trained model directly:
[ "# MixQG (base-sized model)\n\nMixQG is a new question generation model pre-trained on a collection of QA datasets with a mix of answer types. It was introduced in the paper MixQG: Neural Question Generation with Mixed Answer Types and the associated code is released in this repository.", "### How to use\nUsing Huggingface pipeline abstraction:\n\n\nUsing the pre-trained model directly:" ]
[ "TAGS\n#transformers #pytorch #t5 #text2text-generation #en #arxiv-2110.08175 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n", "# MixQG (base-sized model)\n\nMixQG is a new question generation model pre-trained on a collection of QA datasets with a mix of answer types. It was introduced in the paper MixQG: Neural Question Generation with Mixed Answer Types and the associated code is released in this repository.", "### How to use\nUsing Huggingface pipeline abstraction:\n\n\nUsing the pre-trained model directly:" ]
text2text-generation
transformers
# MixQG (large-sized model) MixQG is a new question generation model pre-trained on a collection of QA datasets with a mix of answer types. It was introduced in the paper [MixQG: Neural Question Generation with Mixed Answer Types](https://arxiv.org/abs/2110.08175) and the associated code is released in [this](https://github.com/salesforce/QGen) repository. ### How to use Using Huggingface pipeline abstraction: ``` from transformers import pipeline nlp = pipeline("text2text-generation", model='Salesforce/mixqg-large', tokenizer='Salesforce/mixqg-large') CONTEXT = "In the late 17th century, Robert Boyle proved that air is necessary for combustion." ANSWER = "Robert Boyle" def format_inputs(context: str, answer: str): return f"{answer} \\n {context}" text = format_inputs(CONTEXT, ANSWER) nlp(text) # should output [{'generated_text': 'Who proved that air is necessary for combustion?'}] ``` Using the pre-trained model directly: ``` from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained('Salesforce/mixqg-large') model = AutoModelForSeq2SeqLM.from_pretrained('Salesforce/mixqg-large') CONTEXT = "In the late 17th century, Robert Boyle proved that air is necessary for combustion." ANSWER = "Robert Boyle" def format_inputs(context: str, answer: str): return f"{answer} \\n {context}" text = format_inputs(CONTEXT, ANSWER) input_ids = tokenizer(text, return_tensors="pt").input_ids generated_ids = model.generate(input_ids, max_length=32, num_beams=4) output = tokenizer.batch_decode(generated_ids, skip_special_tokens=True) print(output) # should output "Who proved that air is necessary for combustion?" ``` ### Citation ``` @misc{murakhovska2021mixqg, title={MixQG: Neural Question Generation with Mixed Answer Types}, author={Lidiya Murakhovs'ka and Chien-Sheng Wu and Tong Niu and Wenhao Liu and Caiming Xiong}, year={2021}, eprint={2110.08175}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
{"language": "en", "widget": [{"text": "Robert Boyle \\\\n In the late 17th century, Robert Boyle proved that air is necessary for combustion."}]}
Salesforce/mixqg-large
null
[ "transformers", "pytorch", "t5", "text2text-generation", "en", "arxiv:2110.08175", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2110.08175" ]
[ "en" ]
TAGS #transformers #pytorch #t5 #text2text-generation #en #arxiv-2110.08175 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
# MixQG (large-sized model) MixQG is a new question generation model pre-trained on a collection of QA datasets with a mix of answer types. It was introduced in the paper MixQG: Neural Question Generation with Mixed Answer Types and the associated code is released in this repository. ### How to use Using Huggingface pipeline abstraction: Using the pre-trained model directly:
[ "# MixQG (large-sized model)\n\nMixQG is a new question generation model pre-trained on a collection of QA datasets with a mix of answer types. It was introduced in the paper MixQG: Neural Question Generation with Mixed Answer Types and the associated code is released in this repository.", "### How to use\nUsing Huggingface pipeline abstraction:\n\nUsing the pre-trained model directly:" ]
[ "TAGS\n#transformers #pytorch #t5 #text2text-generation #en #arxiv-2110.08175 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n", "# MixQG (large-sized model)\n\nMixQG is a new question generation model pre-trained on a collection of QA datasets with a mix of answer types. It was introduced in the paper MixQG: Neural Question Generation with Mixed Answer Types and the associated code is released in this repository.", "### How to use\nUsing Huggingface pipeline abstraction:\n\nUsing the pre-trained model directly:" ]
text-generation
transformers
# Harry Potter DialoGPT Model
{"tags": ["conversational"]}
Salma-2/DialoGPT-small-harrypotter
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Harry Potter DialoGPT Model
[ "# Harry Potter DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Harry Potter DialoGPT Model" ]
object-detection
keras
# YOLOv4 YOLO, for "You Only Look Once", is an object detection system in real-time, introduced in [this paper](https://arxiv.org/abs/2004.10934), that recognizes various objects in a single enclosure. It identifies objects more rapidly and more precisely than other recognition systems. This work is credited to three authors: Alexey Bochkovskiy, the Russian developer who built the YOLO Windows version, Chien-Yao Wang, and Hong-Yuan Mark Liao; the entire code is available on [Github](https://github.com/AlexeyAB/darknet). This YOLOv4 library, inspired by previous YOLOv3 implementations here: * [Yolov3 tensorflow](https://github.com/YunYang1994/tensorflow-yolov3) * [Yolov3 tf2](https://github.com/zzh8829/yolov3-tf2) uses Tensorflow 2.0 and is available on this [Github](https://github.com/hunglc007/tensorflow-yolov4-tflite). ### Limitations and biases Object-recognition technology has improved drastically in the past few years across the industry, and it is now part of a huge variety of products and services that millions of people worldwide use. However, errors in object-recognition algorithms can stem from the training data used to create the system being geographically constrained and/or failing to recognize cultural differences. The COCO dataset used to train yolov4-tflite has been found to have annotation errors on more than 20% of images. Such errors include captions describing people differently based on skin tone and gender expression. This serves as a reminder to be cognizant that these biases already exist and a warning to be careful about the increasing bias that is likely to come with advancements in image captioning technology. ### How to use YOLOv4tflite You can use this model to detect objects in an image of choice. Follow the scripts below to implement it on your own! ```bash # install git lfs git lfs install # if presented with the error "git: 'lfs' is not a git command. See 'git --help'", try running these linux commands: curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash # change directory to base cd .. # install git-lfs sudo apt-get install git-lfs # for message "Git LFS initialized" git lfs install # change directory to yolo_v4_tflite cd ./yolo_v4_tflite # clone this repo into your notebook git clone https://huggingface.co/SamMorgan/yolo_v4_tflite # Run demo tensor flow for an example of how this model works python detect.py --weights ./checkpoints/yolov4-416 --size 416 --model yolov4 --image ./data/kite.jpg --output ./test.jpg # Try with your own image python detect.py --weights ./checkpoints/yolov4-416 --size 416 --model yolov4 --image <insert path to image of choice> --output <insert path to output location of choice> ``` ### Evaluate on COCO 2017 Dataset ```bash # run script in /script/get_coco_dataset_2017.sh to download COCO 2017 Dataset # preprocess coco dataset cd data mkdir dataset cd .. cd scripts python coco_convert.py --input ./coco/annotations/instances_val2017.json --output val2017.pkl python coco_annotation.py --coco_path ./coco cd .. # evaluate yolov4 model python evaluate.py --weights ./data/yolov4.weights cd mAP/extra python remove_space.py cd ..
python main.py --output results_yolov4_tf ``` #### mAP50 on COCO 2017 Dataset | Detection | 512x512 | 416x416 | 320x320 | |-------------|---------|---------|---------| | YoloV3 | 55.43 | 52.32 | | | YoloV4 | 61.96 | 57.33 | | ### Benchmark ```bash python benchmarks.py --size 416 --model yolov4 --weights ./data/yolov4.weights ``` #### TensorRT performance | YoloV4 416 images/s | FP32 | FP16 | INT8 | |---------------------|----------|----------|----------| | Batch size 1 | 55 | 116 | | | Batch size 8 | 70 | 152 | | #### Tesla P100 | Detection | 512x512 | 416x416 | 320x320 | |-------------|---------|---------|---------| | YoloV3 FPS | 40.6 | 49.4 | 61.3 | | YoloV4 FPS | 33.4 | 41.7 | 50.0 | #### Tesla K80 | Detection | 512x512 | 416x416 | 320x320 | |-------------|---------|---------|---------| | YoloV3 FPS | 10.8 | 12.9 | 17.6 | | YoloV4 FPS | 9.6 | 11.7 | 16.0 | #### Tesla T4 | Detection | 512x512 | 416x416 | 320x320 | |-------------|---------|---------|---------| | YoloV3 FPS | 27.6 | 32.3 | 45.1 | | YoloV4 FPS | 24.0 | 30.3 | 40.1 | #### Tesla P4 | Detection | 512x512 | 416x416 | 320x320 | |-------------|---------|---------|---------| | YoloV3 FPS | 20.2 | 24.2 | 31.2 | | YoloV4 FPS | 16.2 | 20.2 | 26.5 | #### Macbook Pro 15 (2.3GHz i7) | Detection | 512x512 | 416x416 | 320x320 | |-------------|---------|---------|---------| | YoloV3 FPS | | | | | YoloV4 FPS | | | | ### Training your own model ```bash # Prepare your dataset # If you want to train from scratch: In config.py set FISRT_STAGE_EPOCHS=0 # Run script: python train.py # Transfer learning: python train.py --weights ./data/yolov4.weights ``` The training performance is not fully reproduced yet, so it is recommended to use Alex's [Darknet](https://github.com/AlexeyAB/darknet) to train on your own data, then convert the .weights to TensorFlow or TFLite. ### References * YOLOv4: Optimal Speed and Accuracy of Object Detection [YOLOv4](https://arxiv.org/abs/2004.10934). * [darknet](https://github.com/AlexeyAB/darknet)
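The scripts above run the TensorFlow checkpoints through the repository's own entry points. For a model converted to TFLite, a generic TensorFlow Lite inference sketch could look like the following; the `yolov4-416-fp32.tflite` filename and the 416x416 input size are assumptions for illustration, and the raw outputs still need the repository's YOLO decoding and non-max suppression (see detect.py).

```python
import numpy as np
import tensorflow as tf
from PIL import Image

# assumption: a converted TFLite model exists at this path
interpreter = tf.lite.Interpreter(model_path="./checkpoints/yolov4-416-fp32.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# preprocess: resize to the network input size and scale pixel values to [0, 1]
image = Image.open("./data/kite.jpg").convert("RGB").resize((416, 416))
input_data = np.expand_dims(np.asarray(image, dtype=np.float32) / 255.0, axis=0)

interpreter.set_tensor(input_details[0]["index"], input_data)
interpreter.invoke()

# each output tensor holds raw predictions; decoding and NMS follow as in the repo's detect.py
predictions = [interpreter.get_tensor(d["index"]) for d in output_details]
print([p.shape for p in predictions])
```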
{"language": "en", "license": "mit", "tags": ["object detection", "computer vision", "darknet", "yolo"], "datasets": ["coco", "imagenette"], "thumbnail": "https://github.com/hunglc007/tensorflow-yolov4-tflite", "pipeline_tag": "object-detection"}
SamMorgan/yolo_v4_tflite
null
[ "keras", "tflite", "object detection", "computer vision", "darknet", "yolo", "object-detection", "en", "dataset:coco", "dataset:imagenette", "arxiv:2004.10934", "license:mit", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2004.10934" ]
[ "en" ]
TAGS #keras #tflite #object detection #computer vision #darknet #yolo #object-detection #en #dataset-coco #dataset-imagenette #arxiv-2004.10934 #license-mit #region-us
YOLOv4 ====== YOLO, for "You Only Look Once", is an object detection system in real-time, introduced in this paper, that recognizes various objects in a single enclosure. It identifies objects more rapidly and more precisely than other recognition systems. This work is credited to three authors: Alexey Bochkovskiy, the Russian developer who built the YOLO Windows version, Chien-Yao Wang, and Hong-Yuan Mark Liao; the entire code is available on Github. This YOLOv4 library, inspired by previous YOLOv3 implementations here: * Yolov3 tensorflow * Yolov3 tf2 uses Tensorflow 2.0 and is available on this Github. ### Limitations and biases Object-recognition technology has improved drastically in the past few years across the industry, and it is now part of a huge variety of products and services that millions of people worldwide use. However, errors in object-recognition algorithms can stem from the training data used to create the system being geographically constrained and/or failing to recognize cultural differences. The COCO dataset used to train yolov4-tflite has been found to have annotation errors on more than 20% of images. Such errors include captions describing people differently based on skin tone and gender expression. This serves as a reminder to be cognizant that these biases already exist and a warning to be careful about the increasing bias that is likely to come with advancements in image captioning technology. ### How to use YOLOv4tflite You can use this model to detect objects in an image of choice. Follow the scripts below to implement it on your own! ### Evaluate on COCO 2017 Dataset #### mAP50 on COCO 2017 Dataset ### Benchmark #### TensorRT performance #### Tesla P100 #### Tesla K80 #### Tesla T4 #### Tesla P4 #### Macbook Pro 15 (2.3GHz i7) ### Training your own model The training performance is not fully reproduced yet, so it is recommended to use Alex's Darknet to train on your own data, then convert the .weights to TensorFlow or TFLite. ### References * YOLOv4: Optimal Speed and Accuracy of Object Detection YOLOv4. * darknet
[ "### Limitations and biases\n\n\nObject-recognition technology has improved drastically in the past few years across the industry, and it is now part of a huge variety of products and services that millions of people worldwide use. However, errors in object-recognition algorithms can stem from the training data used to create the system is geographically constrained and/or that it fails to recognize cultural differences.\n\n\nThe COCO dataset used to train yolov4-tflite has been found to have annotation errors on more than 20% of images. Such errors include captions describing people differently based on skin tone and gender expression. This serves as a reminder to be cognizant that these biases already exist and a warning to be careful about the increasing bias that is likely to come with advancements in image captioning technology.", "### How to use YOLOv4tflite\n\n\nYou can use this model to detect objects in an image of choice. Follow the following scripts to implement on your own!", "### Evaluate on COCO 2017 Dataset", "#### mAP50 on COCO 2017 Dataset", "### Benchmark", "#### TensorRT performance", "#### Tesla P100", "#### Tesla K80", "#### Tesla T4", "#### Tesla P4", "#### Macbook Pro 15 (2.3GHz i7)", "### Traning your own model\n\n\nThe training performance is not fully reproduced yet, so I recommended to use Alex's Darknet to train your own data, then convert the .weights to tensorflow or tflite.", "### References\n\n\n* YOLOv4: Optimal Speed and Accuracy of Object Detection YOLOv4.\n* darknet" ]
[ "TAGS\n#keras #tflite #object detection #computer vision #darknet #yolo #object-detection #en #dataset-coco #dataset-imagenette #arxiv-2004.10934 #license-mit #region-us \n", "### Limitations and biases\n\n\nObject-recognition technology has improved drastically in the past few years across the industry, and it is now part of a huge variety of products and services that millions of people worldwide use. However, errors in object-recognition algorithms can stem from the training data used to create the system is geographically constrained and/or that it fails to recognize cultural differences.\n\n\nThe COCO dataset used to train yolov4-tflite has been found to have annotation errors on more than 20% of images. Such errors include captions describing people differently based on skin tone and gender expression. This serves as a reminder to be cognizant that these biases already exist and a warning to be careful about the increasing bias that is likely to come with advancements in image captioning technology.", "### How to use YOLOv4tflite\n\n\nYou can use this model to detect objects in an image of choice. Follow the following scripts to implement on your own!", "### Evaluate on COCO 2017 Dataset", "#### mAP50 on COCO 2017 Dataset", "### Benchmark", "#### TensorRT performance", "#### Tesla P100", "#### Tesla K80", "#### Tesla T4", "#### Tesla P4", "#### Macbook Pro 15 (2.3GHz i7)", "### Traning your own model\n\n\nThe training performance is not fully reproduced yet, so I recommended to use Alex's Darknet to train your own data, then convert the .weights to tensorflow or tflite.", "### References\n\n\n* YOLOv4: Optimal Speed and Accuracy of Object Detection YOLOv4.\n* darknet" ]
text-generation
transformers
# Peter from Your Boyfriend Game.
{"tags": ["conversational"]}
Sammigooof/Peterbot
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Peter from Your Boyfriend Game.
[ "# Peter from Your Boyfriend Game." ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Peter from Your Boyfriend Game." ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-fi-to-en This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt19 dataset. It achieves the following results on the evaluation set: - Loss: 3.5185 - Bleu: 1.2541 - Gen Len: 17.395 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:| | 3.413 | 1.0 | 6250 | 3.5378 | 1.2291 | 17.4057 | | 3.342 | 2.0 | 12500 | 3.5185 | 1.2541 | 17.395 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.9.1 - Datasets 1.16.1 - Tokenizers 0.10.3
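No usage snippet is included in the card above; a minimal inference sketch for this checkpoint is shown below. The `translate Finnish to English:` prefix follows the usual T5 convention and is an assumption, since the exact prefix used during fine-tuning is not stated in the card.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Sancha/t5-small-finetuned-fi-to-en")
model = AutoModelForSeq2SeqLM.from_pretrained("Sancha/t5-small-finetuned-fi-to-en")

# assumption: a T5-style task prefix was used during fine-tuning
text = "translate Finnish to English: Hyvää huomenta, kuinka voit?"
input_ids = tokenizer(text, return_tensors="pt").input_ids

generated_ids = model.generate(input_ids, max_length=64, num_beams=4)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```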
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["wmt19"], "metrics": ["bleu"], "model-index": [{"name": "t5-small-finetuned-fi-to-en", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "wmt19", "type": "wmt19", "args": "fi-en"}, "metrics": [{"type": "bleu", "value": 1.2541, "name": "Bleu"}]}]}]}
Sancha/t5-small-finetuned-fi-to-en
null
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:wmt19", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #dataset-wmt19 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
t5-small-finetuned-fi-to-en =========================== This model is a fine-tuned version of t5-small on the wmt19 dataset. It achieves the following results on the evaluation set: * Loss: 3.5185 * Bleu: 1.2541 * Gen Len: 17.395 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 2 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.12.5 * Pytorch 1.9.1 * Datasets 1.16.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.9.1\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #dataset-wmt19 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.9.1\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-lar-xlsr-es-col This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-spanish](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-spanish) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0947 - Wer: 0.1884 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 4.8446 | 8.51 | 400 | 2.8174 | 0.9854 | | 0.5146 | 17.02 | 800 | 0.1022 | 0.2020 | | 0.0706 | 25.53 | 1200 | 0.0947 | 0.1884 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.1+cu102 - Datasets 1.13.3 - Tokenizers 0.10.3
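The card above does not show how to run inference; a minimal transcription sketch for this checkpoint, assuming a 16 kHz mono recording at the placeholder path `audio.wav`:

```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("Santiagot1105/wav2vec2-lar-xlsr-es-col")
model = Wav2Vec2ForCTC.from_pretrained("Santiagot1105/wav2vec2-lar-xlsr-es-col")

# assumption: audio.wav is a placeholder; wav2vec2 expects 16 kHz mono audio
speech, _ = librosa.load("audio.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```

The same pattern applies to the other fine-tuned wav2vec2 checkpoints in the cards that follow; only the model identifier changes.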
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-lar-xlsr-es-col", "results": []}]}
Santiagot1105/wav2vec2-lar-xlsr-es-col
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
wav2vec2-lar-xlsr-es-col ======================== This model is a fine-tuned version of jonatasgrosman/wav2vec2-large-xlsr-53-spanish on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.0947 * Wer: 0.1884 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0003 * train\_batch\_size: 16 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 2 * total\_train\_batch\_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * num\_epochs: 30 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.11.3 * Pytorch 1.10.1+cu102 * Datasets 1.13.3 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.1+cu102\n* Datasets 1.13.3\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.1+cu102\n* Datasets 1.13.3\n* Tokenizers 0.10.3" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-lar-xlsr-finetune-es-col This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1669 - Wer: 0.2595 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 5.1108 | 8.51 | 400 | 0.5936 | 0.6085 | | 0.3015 | 17.02 | 800 | 0.2071 | 0.2941 | | 0.0989 | 25.53 | 1200 | 0.1669 | 0.2595 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.1+cu102 - Datasets 1.13.3 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-lar-xlsr-finetune-es-col", "results": []}]}
Santiagot1105/wav2vec2-lar-xlsr-finetune-es-col
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
wav2vec2-lar-xlsr-finetune-es-col ================================= This model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53 on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.1669 * Wer: 0.2595 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0003 * train\_batch\_size: 16 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 2 * total\_train\_batch\_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * num\_epochs: 30 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.11.3 * Pytorch 1.10.1+cu102 * Datasets 1.13.3 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.1+cu102\n* Datasets 1.13.3\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.1+cu102\n* Datasets 1.13.3\n* Tokenizers 0.10.3" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xlsr-finetune-es-col This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.6514 - Wer: 0.9874 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 4.9709 | 3.25 | 400 | 2.9673 | 1.0 | | 2.9488 | 6.5 | 800 | 2.9075 | 0.9973 | | 2.907 | 9.76 | 1200 | 2.8772 | 0.9688 | | 2.886 | 13.01 | 1600 | 2.8245 | 0.9484 | | 2.8043 | 16.26 | 2000 | 2.7134 | 0.9874 | | 2.7288 | 19.51 | 2400 | 2.6750 | 0.9874 | | 2.7072 | 22.76 | 2800 | 2.6651 | 0.9874 | | 2.6892 | 26.02 | 3200 | 2.6573 | 0.9874 | | 2.683 | 29.27 | 3600 | 2.6514 | 0.9874 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.1+cu102 - Datasets 1.13.3 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-large-xlsr-finetune-es-col", "results": []}]}
Santiagot1105/wav2vec2-large-xlsr-finetune-es-col
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
wav2vec2-large-xlsr-finetune-es-col =================================== This model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53 on the None dataset. It achieves the following results on the evaluation set: * Loss: 2.6514 * Wer: 0.9874 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0003 * train\_batch\_size: 16 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 2 * total\_train\_batch\_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * num\_epochs: 30 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.11.3 * Pytorch 1.10.1+cu102 * Datasets 1.13.3 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.1+cu102\n* Datasets 1.13.3\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.1+cu102\n* Datasets 1.13.3\n* Tokenizers 0.10.3" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xlsr-finetune-spanish-col This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-spanish](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-spanish) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.7105 - Wer: 0.9824 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 7.2829 | 3.25 | 400 | 2.9632 | 1.0 | | 2.9664 | 6.5 | 800 | 2.8494 | 1.0542 | | 2.8353 | 9.76 | 1200 | 2.8352 | 1.0101 | | 2.7863 | 13.01 | 1600 | 2.7421 | 0.9837 | | 2.762 | 16.26 | 2000 | 2.7254 | 0.9861 | | 2.7483 | 19.51 | 2400 | 2.7228 | 0.9874 | | 2.7482 | 22.76 | 2800 | 2.7228 | 0.9999 | | 2.7373 | 26.02 | 3200 | 2.7163 | 0.9824 | | 2.7328 | 29.27 | 3600 | 2.7105 | 0.9824 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.1+cu102 - Datasets 1.13.3 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-large-xlsr-finetune-spanish-col", "results": []}]}
Santiagot1105/wav2vec2-large-xlsr-finetune-spanish-col
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
wav2vec2-large-xlsr-finetune-spanish-col ======================================== This model is a fine-tuned version of jonatasgrosman/wav2vec2-large-xlsr-53-spanish on the None dataset. It achieves the following results on the evaluation set: * Loss: 2.7105 * Wer: 0.9824 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0003 * train\_batch\_size: 16 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 2 * total\_train\_batch\_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * num\_epochs: 30 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.11.3 * Pytorch 1.10.1+cu102 * Datasets 1.13.3 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.1+cu102\n* Datasets 1.13.3\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.1+cu102\n* Datasets 1.13.3\n* Tokenizers 0.10.3" ]
text-generation
transformers
# Ally DialoGPT Model
{"tags": ["conversational"]}
SarahhhUwU/DialoGPT-small-ally
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Ally DialoGPT Model
[]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
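As with the other fine-tuned wav2vec2 checkpoints above, no inference snippet is provided. The shortest route is the `automatic-speech-recognition` pipeline; `sample.wav` is a placeholder for a 16 kHz speech recording, and decoding audio files this way requires ffmpeg to be installed.

```python
from transformers import pipeline

# assumption: sample.wav is a placeholder path for a 16 kHz mono recording
asr = pipeline(
    "automatic-speech-recognition",
    model="Sarahliu186/wav2vec2-base-timit-demo-colab",
)
print(asr("sample.wav")["text"])
```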
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-base-timit-demo-colab", "results": []}]}
Sarahliu186/wav2vec2-base-timit-demo-colab
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
# wav2vec2-base-timit-demo-colab This model is a fine-tuned version of facebook/wav2vec2-base on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
[ "# wav2vec2-base-timit-demo-colab\n\nThis model is a fine-tuned version of facebook/wav2vec2-base on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 1000\n- num_epochs: 1\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n", "# wav2vec2-base-timit-demo-colab\n\nThis model is a fine-tuned version of facebook/wav2vec2-base on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 1000\n- num_epochs: 1\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.10.3" ]
null
null
<h1>Hugging Face model</h1>
{}
Sarim24/TransformerModel
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #region-us
<h1>Hugging Face model</h1>
[]
[ "TAGS\n#region-us \n" ]
text-generation
null
# Rick DialoGPT Model
{"tags": ["conversational"]}
Sarumomo/DialoGPT-small-test
null
[ "conversational", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #conversational #region-us
# Rick DialoGPT Model
[ "# Rick DialoGPT Model" ]
[ "TAGS\n#conversational #region-us \n", "# Rick DialoGPT Model" ]
null
null
# [WIP] Albert Bengali - dev version ## Model description For the moment, only the tokenizer is available. The tokenizer is based on [SentencePiece](https://github.com/google/sentencepiece) with the Unigram language model segmentation algorithm. Taking into account certain characteristics of the language, we chose the following: - the tokenizer lower-cases all the texts because the Bengali language is a unicameral script (no difference between capital and lower case); - the sentence pieces can't go beyond the boundary of a word because the words are spaced by white spaces in the Bengali language. ## Intended uses & limitations This tokenizer is adapted to the Bengali language. You can use it to pre-train an Albert model on the Bengali language. #### How to use To tokenize: ```python from transformers import AlbertTokenizer tokenizer = AlbertTokenizer.from_pretrained('SaulLu/albert-bn-dev') text = "পোকেমন জাপানী ভিডিও গেম কোম্পানি নিনটেন্ডো কর্তৃক প্রকাশিত একটি মিডিয়া ফ্র‍্যাঞ্চাইজি।" encoded_input = tokenizer(text, return_tensors='pt') ``` #### Limitations and bias Provide examples of latent issues and potential remediations. ## Training data The tokenizer was trained on a random subset of 4M sentences of Bengali Oscar and Bengali Wikipedia. ## Training procedure ### Tokenizer The tokenizer was trained with the [SentencePiece](https://github.com/google/sentencepiece) library on 8 x Intel(R) Core(TM) i7-10510U CPU @ 1.80GHz with 16GB RAM and 36GB SWAP. ``` import sentencepiece as spm config = { "input": "./dataset/oscar_bn.txt,./dataset/wikipedia_bn.txt", "input_format": "text", "model_type": "unigram", "vocab_size": 32000, "self_test_sample_size": 0, "character_coverage": 0.9995, "shuffle_input_sentence": True, "seed_sentencepiece_size": 1000000, "shrinking_factor": 0.75, "num_threads": 8, "num_sub_iterations": 2, "max_sentencepiece_length": 16, "max_sentence_length": 4192, "split_by_unicode_script": True, "split_by_number": True, "split_digits": True, "control_symbols": "[MASK]", "byte_fallback": False, "vocabulary_output_piece_score": True, "normalization_rule_name": "nmt_nfkc_cf", "add_dummy_prefix": True, "remove_extra_whitespaces": True, "hard_vocab_limit": True, "unk_id": 1, "bos_id": 2, "eos_id": 3, "pad_id": 0, "bos_piece": "[CLS]", "eos_piece": "[SEP]", "train_extremely_large_corpus": True, "split_by_whitespace": True, "model_prefix": "./spiece", "input_sentence_size": 4000000, "user_defined_symbols": "(,),-,.,–,£,।", } spm.SentencePieceTrainer.train(**config) ``` <!-- ## Eval results ### BibTeX entry and citation info ```bibtex @inproceedings{..., year={2020} } ``` -->
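Since only the tokenizer is released, pre-training an ALBERT model with it has to start from a freshly initialised configuration. A minimal sketch is shown below; the hyperparameters are illustrative ALBERT-base-like defaults rather than values taken from this project, and only the vocabulary size is tied to the tokenizer.

```python
from transformers import AlbertConfig, AlbertForMaskedLM, AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained("SaulLu/albert-bn-dev")

# assumption: illustrative ALBERT-base-like settings, not the project's own choices
config = AlbertConfig(
    vocab_size=tokenizer.vocab_size,
    embedding_size=128,
    hidden_size=768,
    num_hidden_layers=12,
    num_attention_heads=12,
)
model = AlbertForMaskedLM(config)
print(f"randomly initialised ALBERT with {model.num_parameters():,} parameters")
```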
{"language": ["bn"], "license": "apache-2.0", "tags": [], "datasets": ["oscar", "wikipedia"], "metrics": []}
SaulLu/albert-bn-dev
null
[ "bn", "dataset:oscar", "dataset:wikipedia", "license:apache-2.0", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "bn" ]
TAGS #bn #dataset-oscar #dataset-wikipedia #license-apache-2.0 #region-us
# [WIP] Albert Bengali - dev version ## Model description For the moment, only the tokenizer is available. The tokenizer is based on SentencePiece with the Unigram language model segmentation algorithm. Taking into account certain characteristics of the language, we chose the following: - the tokenizer lower-cases all the texts because the Bengali language is a unicameral script (no difference between capital and lower case); - the sentence pieces can't go beyond the boundary of a word because the words are spaced by white spaces in the Bengali language. ## Intended uses & limitations This tokenizer is adapted to the Bengali language. You can use it to pre-train an Albert model on the Bengali language. #### How to use To tokenize: #### Limitations and bias Provide examples of latent issues and potential remediations. ## Training data The tokenizer was trained on a random subset of 4M sentences of Bengali Oscar and Bengali Wikipedia. ## Training procedure ### Tokenizer The tokenizer was trained with the SentencePiece library on 8 x Intel(R) Core(TM) i7-10510U CPU @ 1.80GHz with 16GB RAM and 36GB SWAP.
[ "# [WIP] Albert Bengali - dev version", "## Model description\n\nFor the moment, only the tokenizer is available. The tokenizer is based on SentencePiece with Unigram language model segmentation algorithm.\n\nTaking into account certain characteristics of the language, we chose that:\n\n- the tokenizer passes in lower case all the texts because the Bengali language is a monocameral scrip (no difference between capital and lower case);\n- the sentence pieces can't go beyond the boundary of a word because the words are spaced by white spaces in the Bengali language.", "## Intended uses & limitations\n\nThis tokenizer is adapted to the Bengali language. You can use it to pre-train an Albert model on the Bengali language.", "#### How to use\n\nTo tokenize:", "#### Limitations and bias\n\nProvide examples of latent issues and potential remediations.", "## Training data\n\nThe tokenizer was trained on a random subset of 4M sentences of Bengali Oscar and Bengali Wikipedia.", "## Training procedure", "### Tokenizer\n\nThe tokenizer was trained with the SentencePiece on 8 x Intel(R) Core(TM) i7-10510U CPU @ 1.80GHz with 16GB RAM and 36GB SWAP." ]
[ "TAGS\n#bn #dataset-oscar #dataset-wikipedia #license-apache-2.0 #region-us \n", "# [WIP] Albert Bengali - dev version", "## Model description\n\nFor the moment, only the tokenizer is available. The tokenizer is based on SentencePiece with Unigram language model segmentation algorithm.\n\nTaking into account certain characteristics of the language, we chose that:\n\n- the tokenizer passes in lower case all the texts because the Bengali language is a monocameral scrip (no difference between capital and lower case);\n- the sentence pieces can't go beyond the boundary of a word because the words are spaced by white spaces in the Bengali language.", "## Intended uses & limitations\n\nThis tokenizer is adapted to the Bengali language. You can use it to pre-train an Albert model on the Bengali language.", "#### How to use\n\nTo tokenize:", "#### Limitations and bias\n\nProvide examples of latent issues and potential remediations.", "## Training data\n\nThe tokenizer was trained on a random subset of 4M sentences of Bengali Oscar and Bengali Wikipedia.", "## Training procedure", "### Tokenizer\n\nThe tokenizer was trained with the SentencePiece on 8 x Intel(R) Core(TM) i7-10510U CPU @ 1.80GHz with 16GB RAM and 36GB SWAP." ]
zero-shot-image-classification
transformers
# Model Card: CLIP Disclaimer: The model card is taken and modified from the official CLIP repository, it can be found [here](https://github.com/openai/CLIP/blob/main/model-card.md). ## Model Details The CLIP model was developed by researchers at OpenAI to learn about what contributes to robustness in computer vision tasks. The model was also developed to test the ability of models to generalize to arbitrary image classification tasks in a zero-shot manner. It was not developed for general model deployment - to deploy models like CLIP, researchers will first need to carefully study their capabilities in relation to the specific context they’re being deployed within. ### Model Date January 2021 ### Model Type The base model uses a ViT-B/32 Transformer architecture as an image encoder and uses a masked self-attention Transformer as a text encoder. These encoders are trained to maximize the similarity of (image, text) pairs via a contrastive loss. There is also a variant of the model where the ResNet image encoder is replaced with a Vision Transformer. ### Model Version Initially, we’ve released one CLIP model based on the Vision Transformer architecture equivalent to ViT-B/32, along with the RN50 model, using the architecture equivalent to ResNet-50. *This port does not include the ResNet model.* Please see the paper linked below for further details about their specification. ### Documents - [Blog Post](https://openai.com/blog/clip/) - [CLIP Paper](https://arxiv.org/abs/2103.00020) ### Use with Transformers ```python3 from PIL import Image import requests from transformers import CLIPProcessor, CLIPModel model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32") processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True) outputs = model(**inputs) logits_per_image = outputs.logits_per_image # this is the image-text similarity score probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities ``` ## Model Use ### Intended Use The model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models - the CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis. #### Primary intended uses The primary intended users of these models are AI researchers. We primarily imagine the model will be used by researchers to better understand robustness, generalization, and other capabilities, biases, and constraints of computer vision models. ### Out-of-Scope Use Cases **Any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task specific testing especially given the variability of CLIP’s performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful. 
Certain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently given the lack of testing norms and checks to ensure its fair use. Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases. ## Data The model was trained on publicly available image-caption data. This was done through a combination of crawling a handful of websites and using commonly-used pre-existing image datasets such as [YFCC100M](http://projects.dfki.uni-kl.de/yfcc100m/). A large portion of the data comes from our crawling of the internet. This means that the data is more representative of people and societies most connected to the internet which tend to skew towards more developed nations, and younger, male users. ### Data Mission Statement Our goal with building this dataset was to test out robustness and generalizability in computer vision tasks. As a result, the focus was on gathering large quantities of data from different publicly-available internet data sources. The data was gathered in a mostly non-interventionist manner. However, we only crawled websites that had policies against excessively violent and adult images and allowed us to filter out such content. We do not intend for this dataset to be used as the basis for any commercial or deployed model and will not be releasing the dataset. ## Performance and Limitations ### Performance We have evaluated the performance of CLIP on a wide range of benchmarks across a variety of computer vision datasets such as OCR to texture recognition to fine-grained classification. The paper describes model performance on the following datasets: - Food101 - CIFAR10 - CIFAR100 - Birdsnap - SUN397 - Stanford Cars - FGVC Aircraft - VOC2007 - DTD - Oxford-IIIT Pet dataset - Caltech101 - Flowers102 - MNIST - SVHN - IIIT5K - Hateful Memes - SST-2 - UCF101 - Kinetics700 - Country211 - CLEVR Counting - KITTI Distance - STL-10 - RareAct - Flickr30 - MSCOCO - ImageNet - ImageNet-A - ImageNet-R - ImageNet Sketch - ObjectNet (ImageNet Overlap) - Youtube-BB - ImageNet-Vid ## Limitations CLIP and our analysis of it have a number of limitations. CLIP currently struggles with respect to certain tasks such as fine grained classification and counting objects. CLIP also poses issues with regards to fairness and bias which we discuss in the paper and briefly in the next section. Additionally, our approach to testing CLIP also has an important limitation- in many cases we have used linear probes to evaluate the performance of CLIP and there is evidence suggesting that linear probes can underestimate model performance. ### Bias and Fairness We find that the performance of CLIP - and the specific biases it exhibits - can depend significantly on class design and the choices one makes for categories to include and exclude. We tested the risk of certain kinds of denigration with CLIP by classifying images of people from [Fairface](https://arxiv.org/abs/1908.04913) into crime-related and non-human animal categories. We found significant disparities with respect to race and gender. Additionally, we found that these disparities could shift based on how the classes were constructed. (Details captured in the Broader Impacts Section in the paper). 
We also tested the performance of CLIP on gender, race and age classification using the Fairface dataset (We default to using race categories as they are constructed in the Fairface dataset.) in order to assess quality of performance across different demographics. We found accuracy >96% across all races for gender classification with ‘Middle Eastern’ having the highest accuracy (98.4%) and ‘White’ having the lowest (96.5%). Additionally, CLIP averaged ~93% for racial classification and ~63% for age classification. Our use of evaluations to test for gender, race and age classification as well as denigration harms is simply to evaluate performance of the model across people and surface potential risks and not to demonstrate an endorsement/enthusiasm for such tasks. ## Feedback ### Where to send questions or comments about the model Please use [this Google Form](https://forms.gle/Uv7afRH5dvY34ZEs9)
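As a complement to the "Use with Transformers" snippet earlier in this card, the same zero-shot classification can be run through the high-level pipeline API. This is a sketch that assumes a transformers version shipping the zero-shot-image-classification pipeline; the candidate labels and example image mirror the snippet above.

```python
from transformers import pipeline

# Zero-shot image classification with the ViT-B/32 CLIP checkpoint.
classifier = pipeline(
    "zero-shot-image-classification",
    model="openai/clip-vit-base-patch32",
)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
preds = classifier(url, candidate_labels=["a photo of a cat", "a photo of a dog"])
print(preds)  # list of {"score": ..., "label": ...} dicts, highest score first
```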
{"tags": ["vision"]}
SaulLu/clip-vit-base-patch32
null
[ "transformers", "pytorch", "tf", "jax", "clip", "zero-shot-image-classification", "vision", "arxiv:2103.00020", "arxiv:1908.04913", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2103.00020", "1908.04913" ]
[]
TAGS #transformers #pytorch #tf #jax #clip #zero-shot-image-classification #vision #arxiv-2103.00020 #arxiv-1908.04913 #endpoints_compatible #region-us
# Model Card: CLIP Disclaimer: The model card is taken and modified from the official CLIP repository, it can be found here. ## Model Details The CLIP model was developed by researchers at OpenAI to learn about what contributes to robustness in computer vision tasks. The model was also developed to test the ability of models to generalize to arbitrary image classification tasks in a zero-shot manner. It was not developed for general model deployment - to deploy models like CLIP, researchers will first need to carefully study their capabilities in relation to the specific context they’re being deployed within. ### Model Date January 2021 ### Model Type The base model uses a ViT-B/32 Transformer architecture as an image encoder and uses a masked self-attention Transformer as a text encoder. These encoders are trained to maximize the similarity of (image, text) pairs via a contrastive loss. There is also a variant of the model where the ResNet image encoder is replaced with a Vision Transformer. ### Model Version Initially, we’ve released one CLIP model based on the Vision Transformer architecture equivalent to ViT-B/32, along with the RN50 model, using the architecture equivalent to ResNet-50. *This port does not include the ResNet model.* Please see the paper linked below for further details about their specification. ### Documents - Blog Post - CLIP Paper ### Use with Transformers ## Model Use ### Intended Use The model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models - the CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis. #### Primary intended uses The primary intended users of these models are AI researchers. We primarily imagine the model will be used by researchers to better understand robustness, generalization, and other capabilities, biases, and constraints of computer vision models. ### Out-of-Scope Use Cases Any deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task specific testing especially given the variability of CLIP’s performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful. Certain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently given the lack of testing norms and checks to ensure its fair use. Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases. ## Data The model was trained on publicly available image-caption data. This was done through a combination of crawling a handful of websites and using commonly-used pre-existing image datasets such as YFCC100M. A large portion of the data comes from our crawling of the internet. 
This means that the data is more representative of people and societies most connected to the internet which tend to skew towards more developed nations, and younger, male users. ### Data Mission Statement Our goal with building this dataset was to test out robustness and generalizability in computer vision tasks. As a result, the focus was on gathering large quantities of data from different publicly-available internet data sources. The data was gathered in a mostly non-interventionist manner. However, we only crawled websites that had policies against excessively violent and adult images and allowed us to filter out such content. We do not intend for this dataset to be used as the basis for any commercial or deployed model and will not be releasing the dataset. ## Performance and Limitations ### Performance We have evaluated the performance of CLIP on a wide range of benchmarks across a variety of computer vision datasets such as OCR to texture recognition to fine-grained classification. The paper describes model performance on the following datasets: - Food101 - CIFAR10 - CIFAR100 - Birdsnap - SUN397 - Stanford Cars - FGVC Aircraft - VOC2007 - DTD - Oxford-IIIT Pet dataset - Caltech101 - Flowers102 - MNIST - SVHN - IIIT5K - Hateful Memes - SST-2 - UCF101 - Kinetics700 - Country211 - CLEVR Counting - KITTI Distance - STL-10 - RareAct - Flickr30 - MSCOCO - ImageNet - ImageNet-A - ImageNet-R - ImageNet Sketch - ObjectNet (ImageNet Overlap) - Youtube-BB - ImageNet-Vid ## Limitations CLIP and our analysis of it have a number of limitations. CLIP currently struggles with respect to certain tasks such as fine grained classification and counting objects. CLIP also poses issues with regards to fairness and bias which we discuss in the paper and briefly in the next section. Additionally, our approach to testing CLIP also has an important limitation- in many cases we have used linear probes to evaluate the performance of CLIP and there is evidence suggesting that linear probes can underestimate model performance. ### Bias and Fairness We find that the performance of CLIP - and the specific biases it exhibits - can depend significantly on class design and the choices one makes for categories to include and exclude. We tested the risk of certain kinds of denigration with CLIP by classifying images of people from Fairface into crime-related and non-human animal categories. We found significant disparities with respect to race and gender. Additionally, we found that these disparities could shift based on how the classes were constructed. (Details captured in the Broader Impacts Section in the paper). We also tested the performance of CLIP on gender, race and age classification using the Fairface dataset (We default to using race categories as they are constructed in the Fairface dataset.) in order to assess quality of performance across different demographics. We found accuracy >96% across all races for gender classification with ‘Middle Eastern’ having the highest accuracy (98.4%) and ‘White’ having the lowest (96.5%). Additionally, CLIP averaged ~93% for racial classification and ~63% for age classification. Our use of evaluations to test for gender, race and age classification as well as denigration harms is simply to evaluate performance of the model across people and surface potential risks and not to demonstrate an endorsement/enthusiasm for such tasks. ## Feedback ### Where to send questions or comments about the model Please use this Google Form
[ "# Model Card: CLIP\n\nDisclaimer: The model card is taken and modified from the official CLIP repository, it can be found here.", "## Model Details\n\nThe CLIP model was developed by researchers at OpenAI to learn about what contributes to robustness in computer vision tasks. The model was also developed to test the ability of models to generalize to arbitrary image classification tasks in a zero-shot manner. It was not developed for general model deployment - to deploy models like CLIP, researchers will first need to carefully study their capabilities in relation to the specific context they’re being deployed within.", "### Model Date\n\nJanuary 2021", "### Model Type\n\nThe base model uses a ViT-B/32 Transformer architecture as an image encoder and uses a masked self-attention Transformer as a text encoder. These encoders are trained to maximize the similarity of (image, text) pairs via a contrastive loss. There is also a variant of the model where the ResNet image encoder is replaced with a Vision Transformer.", "### Model Version\n\nInitially, we’ve released one CLIP model based on the Vision Transformer architecture equivalent to ViT-B/32, along with the RN50 model, using the architecture equivalent to ResNet-50.\n\n*This port does not include the ResNet model.*\n\nPlease see the paper linked below for further details about their specification.", "### Documents\n\n- Blog Post\n- CLIP Paper", "### Use with Transformers", "## Model Use", "### Intended Use\n\nThe model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models - the CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis.", "#### Primary intended uses\n\nThe primary intended users of these models are AI researchers.\n\nWe primarily imagine the model will be used by researchers to better understand robustness, generalization, and other capabilities, biases, and constraints of computer vision models.", "### Out-of-Scope Use Cases\n\nAny deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task specific testing especially given the variability of CLIP’s performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful. \n\nCertain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently given the lack of testing norms and checks to ensure its fair use.\n\nSince the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases.", "## Data\n\nThe model was trained on publicly available image-caption data. This was done through a combination of crawling a handful of websites and using commonly-used pre-existing image datasets such as YFCC100M. 
A large portion of the data comes from our crawling of the internet. This means that the data is more representative of people and societies most connected to the internet which tend to skew towards more developed nations, and younger, male users.", "### Data Mission Statement\n\nOur goal with building this dataset was to test out robustness and generalizability in computer vision tasks. As a result, the focus was on gathering large quantities of data from different publicly-available internet data sources. The data was gathered in a mostly non-interventionist manner. However, we only crawled websites that had policies against excessively violent and adult images and allowed us to filter out such content. We do not intend for this dataset to be used as the basis for any commercial or deployed model and will not be releasing the dataset.", "## Performance and Limitations", "### Performance\n\nWe have evaluated the performance of CLIP on a wide range of benchmarks across a variety of computer vision datasets such as OCR to texture recognition to fine-grained classification. The paper describes model performance on the following datasets:\n\n- Food101\n- CIFAR10 \n- CIFAR100 \n- Birdsnap\n- SUN397\n- Stanford Cars\n- FGVC Aircraft\n- VOC2007\n- DTD\n- Oxford-IIIT Pet dataset\n- Caltech101\n- Flowers102\n- MNIST \n- SVHN \n- IIIT5K \n- Hateful Memes \n- SST-2\n- UCF101\n- Kinetics700\n- Country211\n- CLEVR Counting\n- KITTI Distance\n- STL-10\n- RareAct\n- Flickr30\n- MSCOCO\n- ImageNet\n- ImageNet-A\n- ImageNet-R\n- ImageNet Sketch\n- ObjectNet (ImageNet Overlap)\n- Youtube-BB\n- ImageNet-Vid", "## Limitations\n\nCLIP and our analysis of it have a number of limitations. CLIP currently struggles with respect to certain tasks such as fine grained classification and counting objects. CLIP also poses issues with regards to fairness and bias which we discuss in the paper and briefly in the next section. Additionally, our approach to testing CLIP also has an important limitation- in many cases we have used linear probes to evaluate the performance of CLIP and there is evidence suggesting that linear probes can underestimate model performance.", "### Bias and Fairness\n\nWe find that the performance of CLIP - and the specific biases it exhibits - can depend significantly on class design and the choices one makes for categories to include and exclude. We tested the risk of certain kinds of denigration with CLIP by classifying images of people from Fairface into crime-related and non-human animal categories. We found significant disparities with respect to race and gender. Additionally, we found that these disparities could shift based on how the classes were constructed. (Details captured in the Broader Impacts Section in the paper).\n\nWe also tested the performance of CLIP on gender, race and age classification using the Fairface dataset (We default to using race categories as they are constructed in the Fairface dataset.) in order to assess quality of performance across different demographics. We found accuracy >96% across all races for gender classification with ‘Middle Eastern’ having the highest accuracy (98.4%) and ‘White’ having the lowest (96.5%). Additionally, CLIP averaged ~93% for racial classification and ~63% for age classification. 
Our use of evaluations to test for gender, race and age classification as well as denigration harms is simply to evaluate performance of the model across people and surface potential risks and not to demonstrate an endorsement/enthusiasm for such tasks.", "## Feedback", "### Where to send questions or comments about the model\n\nPlease use this Google Form" ]
[ "TAGS\n#transformers #pytorch #tf #jax #clip #zero-shot-image-classification #vision #arxiv-2103.00020 #arxiv-1908.04913 #endpoints_compatible #region-us \n", "# Model Card: CLIP\n\nDisclaimer: The model card is taken and modified from the official CLIP repository, it can be found here.", "## Model Details\n\nThe CLIP model was developed by researchers at OpenAI to learn about what contributes to robustness in computer vision tasks. The model was also developed to test the ability of models to generalize to arbitrary image classification tasks in a zero-shot manner. It was not developed for general model deployment - to deploy models like CLIP, researchers will first need to carefully study their capabilities in relation to the specific context they’re being deployed within.", "### Model Date\n\nJanuary 2021", "### Model Type\n\nThe base model uses a ViT-B/32 Transformer architecture as an image encoder and uses a masked self-attention Transformer as a text encoder. These encoders are trained to maximize the similarity of (image, text) pairs via a contrastive loss. There is also a variant of the model where the ResNet image encoder is replaced with a Vision Transformer.", "### Model Version\n\nInitially, we’ve released one CLIP model based on the Vision Transformer architecture equivalent to ViT-B/32, along with the RN50 model, using the architecture equivalent to ResNet-50.\n\n*This port does not include the ResNet model.*\n\nPlease see the paper linked below for further details about their specification.", "### Documents\n\n- Blog Post\n- CLIP Paper", "### Use with Transformers", "## Model Use", "### Intended Use\n\nThe model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models - the CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis.", "#### Primary intended uses\n\nThe primary intended users of these models are AI researchers.\n\nWe primarily imagine the model will be used by researchers to better understand robustness, generalization, and other capabilities, biases, and constraints of computer vision models.", "### Out-of-Scope Use Cases\n\nAny deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task specific testing especially given the variability of CLIP’s performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful. \n\nCertain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently given the lack of testing norms and checks to ensure its fair use.\n\nSince the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases.", "## Data\n\nThe model was trained on publicly available image-caption data. 
This was done through a combination of crawling a handful of websites and using commonly-used pre-existing image datasets such as YFCC100M. A large portion of the data comes from our crawling of the internet. This means that the data is more representative of people and societies most connected to the internet which tend to skew towards more developed nations, and younger, male users.", "### Data Mission Statement\n\nOur goal with building this dataset was to test out robustness and generalizability in computer vision tasks. As a result, the focus was on gathering large quantities of data from different publicly-available internet data sources. The data was gathered in a mostly non-interventionist manner. However, we only crawled websites that had policies against excessively violent and adult images and allowed us to filter out such content. We do not intend for this dataset to be used as the basis for any commercial or deployed model and will not be releasing the dataset.", "## Performance and Limitations", "### Performance\n\nWe have evaluated the performance of CLIP on a wide range of benchmarks across a variety of computer vision datasets such as OCR to texture recognition to fine-grained classification. The paper describes model performance on the following datasets:\n\n- Food101\n- CIFAR10 \n- CIFAR100 \n- Birdsnap\n- SUN397\n- Stanford Cars\n- FGVC Aircraft\n- VOC2007\n- DTD\n- Oxford-IIIT Pet dataset\n- Caltech101\n- Flowers102\n- MNIST \n- SVHN \n- IIIT5K \n- Hateful Memes \n- SST-2\n- UCF101\n- Kinetics700\n- Country211\n- CLEVR Counting\n- KITTI Distance\n- STL-10\n- RareAct\n- Flickr30\n- MSCOCO\n- ImageNet\n- ImageNet-A\n- ImageNet-R\n- ImageNet Sketch\n- ObjectNet (ImageNet Overlap)\n- Youtube-BB\n- ImageNet-Vid", "## Limitations\n\nCLIP and our analysis of it have a number of limitations. CLIP currently struggles with respect to certain tasks such as fine grained classification and counting objects. CLIP also poses issues with regards to fairness and bias which we discuss in the paper and briefly in the next section. Additionally, our approach to testing CLIP also has an important limitation- in many cases we have used linear probes to evaluate the performance of CLIP and there is evidence suggesting that linear probes can underestimate model performance.", "### Bias and Fairness\n\nWe find that the performance of CLIP - and the specific biases it exhibits - can depend significantly on class design and the choices one makes for categories to include and exclude. We tested the risk of certain kinds of denigration with CLIP by classifying images of people from Fairface into crime-related and non-human animal categories. We found significant disparities with respect to race and gender. Additionally, we found that these disparities could shift based on how the classes were constructed. (Details captured in the Broader Impacts Section in the paper).\n\nWe also tested the performance of CLIP on gender, race and age classification using the Fairface dataset (We default to using race categories as they are constructed in the Fairface dataset.) in order to assess quality of performance across different demographics. We found accuracy >96% across all races for gender classification with ‘Middle Eastern’ having the highest accuracy (98.4%) and ‘White’ having the lowest (96.5%). Additionally, CLIP averaged ~93% for racial classification and ~63% for age classification. 
Our use of evaluations to test for gender, race and age classification as well as denigration harms is simply to evaluate performance of the model across people and surface potential risks and not to demonstrate an endorsement/enthusiasm for such tasks.", "## Feedback", "### Where to send questions or comments about the model\n\nPlease use this Google Form" ]
text2text-generation
transformers
# CodeT5 (small-sized model) Pre-trained CodeT5 model. It was introduced in the paper [CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation](https://arxiv.org/abs/2109.00859) by Yue Wang, Weishi Wang, Shafiq Joty, Steven C.H. Hoi and first released in [this repository](https://github.com/salesforce/CodeT5). Disclaimer: The team releasing CodeT5 did not write a model card for this model so this model card has been written by the Hugging Face team (more specifically, [nielsr](https://huggingface.co/nielsr)). ## Model description From the abstract: "We present CodeT5, a unified pre-trained encoder-decoder Transformer model that better leverages the code semantics conveyed from the developer-assigned identifiers. Our model employs a unified framework to seamlessly support both code understanding and generation tasks and allows for multi-task learning. Besides, we propose a novel identifier-aware pre-training task that enables the model to distinguish which code tokens are identifiers and to recover them when they are masked. Furthermore, we propose to exploit the user-written code comments with a bimodal dual generation task for better NL-PL alignment. Comprehensive experiments show that CodeT5 significantly outperforms prior methods on understanding tasks such as code defect detection and clone detection, and generation tasks across various directions including PL-NL, NL-PL, and PL-PL. Further analysis reveals that our model can better capture semantic information from code." ## Intended uses & limitations This repository contains the pre-trained model only, so you can use this model for masked span prediction, as shown in the code example below. However, the main use of this model is to fine-tune it for a downstream task of interest, such as: * code summarization * code generation * code translation * code refinement * code defect detection * code clone detection. See the [model hub](https://huggingface.co/models?search=salesforce/codet) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model: ```python from transformers import RobertaTokenizer, T5ForConditionalGeneration tokenizer = RobertaTokenizer.from_pretrained('Salesforce/codet5-small') model = T5ForConditionalGeneration.from_pretrained('Salesforce/codet5-small') text = "def greet(user): print(f'hello <extra_id_0>!')" input_ids = tokenizer(text, return_tensors="pt").input_ids # simply generate a single sequence generated_ids = model.generate(input_ids, max_length=10) print(tokenizer.decode(generated_ids[0], skip_special_tokens=True)) # this prints "user: {user.name}" ``` ## Training data The CodeT5 model was pretrained on CodeSearchNet [Husain et al., 2019](https://arxiv.org/abs/1909.09436). Additionally, the authors collected two datasets of C/CSharp from [BigQuery1](https://console.cloud.google.com/marketplace/details/github/github-repos) to ensure that all downstream tasks have overlapped programming languages with the pre-training data. In total, around 8.35 million instances are used for pretraining. ## Training procedure ### Preprocessing This model uses a code-specific BPE (Byte-Pair Encoding) tokenizer. One can prepare text (or code) for the model using RobertaTokenizer, with the files from this repository. ## Evaluation results For evaluation results on several downstream benchmarks, we refer to the paper. 
### BibTeX entry and citation info ```bibtex @misc{wang2021codet5, title={CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation}, author={Yue Wang and Weishi Wang and Shafiq Joty and Steven C. H. Hoi}, year={2021}, eprint={2109.00859}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
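As a small companion to the "Preprocessing" note above, the sketch below shows how the code-specific BPE tokenizer splits a snippet of source code; the checkpoint name is the one from the usage example, and the example function is arbitrary.

```python
from transformers import RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("Salesforce/codet5-small")

code = "def add(a, b):\n    return a + b"

# Byte-level BPE pieces produced for the snippet.
print(tokenizer.tokenize(code))

# Token ids ready to feed into T5ForConditionalGeneration.
print(tokenizer(code, return_tensors="pt").input_ids)
```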
{"license": "apache-2.0", "tags": ["codet5"], "datasets": ["code_search_net"], "inference": false}
SaulLu/cotet5_small_fix
null
[ "transformers", "pytorch", "t5", "text2text-generation", "codet5", "dataset:code_search_net", "arxiv:2109.00859", "arxiv:1909.09436", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2109.00859", "1909.09436" ]
[]
TAGS #transformers #pytorch #t5 #text2text-generation #codet5 #dataset-code_search_net #arxiv-2109.00859 #arxiv-1909.09436 #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us
# CodeT5 (small-sized model) Pre-trained CodeT5 model. It was introduced in the paper CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation by Yue Wang, Weishi Wang, Shafiq Joty, Steven C.H. Hoi and first released in this repository. Disclaimer: The team releasing CodeT5 did not write a model card for this model so this model card has been written by the Hugging Face team (more specifically, nielsr). ## Model description From the abstract: "We present CodeT5, a unified pre-trained encoder-decoder Transformer model that better leverages the code semantics conveyed from the developer-assigned identifiers. Our model employs a unified framework to seamlessly support both code understanding and generation tasks and allows for multi-task learning. Besides, we propose a novel identifier-aware pre-training task that enables the model to distinguish which code tokens are identifiers and to recover them when they are masked. Furthermore, we propose to exploit the user-written code comments with a bimodal dual generation task for better NL-PL alignment. Comprehensive experiments show that CodeT5 significantly outperforms prior methods on understanding tasks such as code defect detection and clone detection, and generation tasks across various directions including PL-NL, NL-PL, and PL-PL. Further analysis reveals that our model can better capture semantic information from code." ## Intended uses & limitations This repository contains the pre-trained model only, so you can use this model for masked span prediction, as shown in the code example below. However, the main use of this model is to fine-tune it for a downstream task of interest, such as: * code summarization * code generation * code translation * code refinement * code defect detection * code clone detection. See the model hub to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model: ## Training data The CodeT5 model was pretrained on CodeSearchNet Husain et al., 2019. Additionally, the authors collected two datasets of C/CSharp from BigQuery1 to ensure that all downstream tasks have overlapped programming languages with the pre-training data. In total, around 8.35 million instances are used for pretraining. ## Training procedure ### Preprocessing This model uses a code-specific BPE (Byte-Pair Encoding) tokenizer. One can prepare text (or code) for the model using RobertaTokenizer, with the files from this repository. ## Evaluation results For evaluation results on several downstream benchmarks, we refer to the paper. ### BibTeX entry and citation info
[ "# CodeT5 (small-sized model) \n\nPre-trained CodeT5 model. It was introduced in the paper CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models\nfor Code Understanding and Generation by Yue Wang, Weishi Wang, Shafiq Joty, Steven C.H. Hoi and first released in this repository. \n\nDisclaimer: The team releasing CodeT5 did not write a model card for this model so this model card has been written by the Hugging Face team (more specifically, nielsr).", "## Model description\n\nFrom the abstract:\n\n\"We present CodeT5, a unified pre-trained encoder-decoder Transformer model that better leverages the code semantics conveyed from the developer-assigned identifiers. Our model employs a unified framework to seamlessly support both code understanding and generation tasks and allows for multi-task learning. Besides, we propose a novel identifier-aware pre-training task that enables the model to distinguish which code tokens are identifiers and to recover them when they are masked. Furthermore, we propose to exploit the user-written code comments with a bimodal dual generation task for better NL-PL alignment. Comprehensive experiments show that CodeT5 significantly outperforms prior methods on understanding tasks such as code defect detection and clone detection, and generation tasks across various directions including PL-NL, NL-PL, and PL-PL. Further analysis reveals that our model can better capture semantic information from code.\"", "## Intended uses & limitations\n\nThis repository contains the pre-trained model only, so you can use this model for masked span prediction, as shown in the code example below. However, the main use of this model is to fine-tune it for a downstream task of interest, such as:\n* code summarization\n* code generation\n* code translation\n* code refinement\n* code defect detection\n* code clone detection. \n\nSee the model hub to look for fine-tuned versions on a task that interests you.", "### How to use\n\nHere is how to use this model:", "## Training data\n\nThe CodeT5 model was pretrained on CodeSearchNet Husain et al., 2019. Additionally, the authors collected two datasets of C/CSharp from BigQuery1 to ensure that all downstream tasks have overlapped programming languages with the pre-training data. In total, around 8.35 million instances are used for pretraining.", "## Training procedure", "### Preprocessing\n\nThis model uses a code-specific BPE (Byte-Pair Encoding) tokenizer. One can prepare text (or code) for the model using RobertaTokenizer, with the files from this repository.", "## Evaluation results\n\nFor evaluation results on several downstream benchmarks, we refer to the paper.", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #t5 #text2text-generation #codet5 #dataset-code_search_net #arxiv-2109.00859 #arxiv-1909.09436 #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us \n", "# CodeT5 (small-sized model) \n\nPre-trained CodeT5 model. It was introduced in the paper CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models\nfor Code Understanding and Generation by Yue Wang, Weishi Wang, Shafiq Joty, Steven C.H. Hoi and first released in this repository. \n\nDisclaimer: The team releasing CodeT5 did not write a model card for this model so this model card has been written by the Hugging Face team (more specifically, nielsr).", "## Model description\n\nFrom the abstract:\n\n\"We present CodeT5, a unified pre-trained encoder-decoder Transformer model that better leverages the code semantics conveyed from the developer-assigned identifiers. Our model employs a unified framework to seamlessly support both code understanding and generation tasks and allows for multi-task learning. Besides, we propose a novel identifier-aware pre-training task that enables the model to distinguish which code tokens are identifiers and to recover them when they are masked. Furthermore, we propose to exploit the user-written code comments with a bimodal dual generation task for better NL-PL alignment. Comprehensive experiments show that CodeT5 significantly outperforms prior methods on understanding tasks such as code defect detection and clone detection, and generation tasks across various directions including PL-NL, NL-PL, and PL-PL. Further analysis reveals that our model can better capture semantic information from code.\"", "## Intended uses & limitations\n\nThis repository contains the pre-trained model only, so you can use this model for masked span prediction, as shown in the code example below. However, the main use of this model is to fine-tune it for a downstream task of interest, such as:\n* code summarization\n* code generation\n* code translation\n* code refinement\n* code defect detection\n* code clone detection. \n\nSee the model hub to look for fine-tuned versions on a task that interests you.", "### How to use\n\nHere is how to use this model:", "## Training data\n\nThe CodeT5 model was pretrained on CodeSearchNet Husain et al., 2019. Additionally, the authors collected two datasets of C/CSharp from BigQuery1 to ensure that all downstream tasks have overlapped programming languages with the pre-training data. In total, around 8.35 million instances are used for pretraining.", "## Training procedure", "### Preprocessing\n\nThis model uses a code-specific BPE (Byte-Pair Encoding) tokenizer. One can prepare text (or code) for the model using RobertaTokenizer, with the files from this repository.", "## Evaluation results\n\nFor evaluation results on several downstream benchmarks, we refer to the paper.", "### BibTeX entry and citation info" ]
null
transformers
# MarkupLM **Multimodal (text + markup language) pre-training for [Document AI](https://www.microsoft.com/en-us/research/project/document-ai/)** ## Introduction MarkupLM is a simple but effective multi-modal pre-training method for text and markup language, targeting visually-rich document understanding and information extraction tasks such as webpage QA and webpage information extraction. MarkupLM achieves SOTA results on multiple datasets. For more details, please refer to our paper: [MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding](https://arxiv.org/abs/2110.08518) Junlong Li, Yiheng Xu, Lei Cui, Furu Wei
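The card above does not include a usage example, so here is a minimal sketch. It assumes a transformers version that includes the MarkupLM classes (MarkupLMProcessor, MarkupLMModel) and that this checkpoint ships compatible tokenizer/processor files; if it does not, the files of the official microsoft/markuplm-base release can be substituted. The HTML string is arbitrary.

```python
from transformers import MarkupLMProcessor, MarkupLMModel

# Checkpoint id taken from this card; swap in "microsoft/markuplm-base" if needed.
checkpoint = "SaulLu/markuplm-base"

processor = MarkupLMProcessor.from_pretrained(checkpoint)  # HTML parsing requires beautifulsoup4
model = MarkupLMModel.from_pretrained(checkpoint)

html = "<html><body><h1>Weekly report</h1><p>Sales went up in March.</p></body></html>"

# The processor extracts nodes and their XPaths from the HTML, then tokenizes them.
encoding = processor(html, return_tensors="pt")
outputs = model(**encoding)
print(outputs.last_hidden_state.shape)
```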
{}
SaulLu/markuplm-base
null
[ "transformers", "pytorch", "markuplm", "arxiv:2110.08518", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2110.08518" ]
[]
TAGS #transformers #pytorch #markuplm #arxiv-2110.08518 #endpoints_compatible #region-us
# MarkupLM Multimodal (text +markup language) pre-training for Document AI ## Introduction MarkupLM is a simple but effective multi-modal pre-training method of text and markup language for visually-rich document understanding and information extraction tasks, such as webpage QA and webpage information extraction. MarkupLM archives the SOTA results on multiple datasets. For more details, please refer to our paper: MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding Junlong Li, Yiheng Xu, Lei Cui, Furu Wei
[ "# MarkupLM\n\nMultimodal (text +markup language) pre-training for Document AI", "## Introduction\n\nMarkupLM is a simple but effective multi-modal pre-training method of text and markup language for visually-rich document understanding and information extraction tasks, such as webpage QA and webpage information extraction. MarkupLM archives the SOTA results on multiple datasets. For more details, please refer to our paper:\n\nMarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding Junlong Li, Yiheng Xu, Lei Cui, Furu Wei" ]
[ "TAGS\n#transformers #pytorch #markuplm #arxiv-2110.08518 #endpoints_compatible #region-us \n", "# MarkupLM\n\nMultimodal (text +markup language) pre-training for Document AI", "## Introduction\n\nMarkupLM is a simple but effective multi-modal pre-training method of text and markup language for visually-rich document understanding and information extraction tasks, such as webpage QA and webpage information extraction. MarkupLM archives the SOTA results on multiple datasets. For more details, please refer to our paper:\n\nMarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding Junlong Li, Yiheng Xu, Lei Cui, Furu Wei" ]
token-classification
transformers
# sahajBERT Named Entity Recognition ## Model description [sahajBERT](https://huggingface.co/neuropark/sahajBERT-NER) fine-tuned for NER using the Bengali split of [WikiANN](https://huggingface.co/datasets/wikiann). Named Entities predicted by the model: | Label id | Label | |:--------:|:----:| |0 |O| |1 |B-PER| |2 |I-PER| |3 |B-ORG| |4 |I-ORG| |5 |B-LOC| |6 |I-LOC| ## Intended uses & limitations #### How to use You can use this model directly with a token classification pipeline: ```python from transformers import AlbertForTokenClassification, TokenClassificationPipeline, PreTrainedTokenizerFast # Initialize tokenizer tokenizer = PreTrainedTokenizerFast.from_pretrained("neuropark/sahajBERT-NER") # Initialize model model = AlbertForTokenClassification.from_pretrained("neuropark/sahajBERT-NER") # Initialize pipeline pipeline = TokenClassificationPipeline(tokenizer=tokenizer, model=model) raw_text = "এই ইউনিয়নে ৩ টি মৌজা ও ১০ টি গ্রাম আছে ।" # Change me output = pipeline(raw_text) ``` #### Limitations and bias <!-- Provide examples of latent issues and potential remediations. --> WIP ## Training data The model was initialized with the pre-trained weights of [sahajBERT](https://huggingface.co/neuropark/sahajBERT-NER) at step 19519 and trained on the Bengali split of [WikiANN](https://huggingface.co/datasets/wikiann). ## Training procedure Coming soon! <!-- ```bibtex @inproceedings{..., year={2020} } ``` --> ## Eval results loss: 0.11714419722557068 accuracy: 0.9772286821705426 precision: 0.9585365853658536 recall: 0.9651277013752456 f1: 0.9618208516886931 ### BibTeX entry and citation info Coming soon! <!-- ```bibtex @inproceedings{..., year={2020} } ``` -->
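As a follow-up to the usage snippet in the card above, recent transformers versions can also merge sub-token predictions into whole entity spans through the high-level pipeline API; this sketch assumes such a version and reuses the checkpoint and example sentence from the card.

```python
from transformers import AlbertForTokenClassification, PreTrainedTokenizerFast, pipeline

tokenizer = PreTrainedTokenizerFast.from_pretrained("neuropark/sahajBERT-NER")
model = AlbertForTokenClassification.from_pretrained("neuropark/sahajBERT-NER")

# "simple" aggregation merges B-/I- pieces belonging to the same entity into one span.
ner = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",
)

print(ner("এই ইউনিয়নে ৩ টি মৌজা ও ১০ টি গ্রাম আছে ।"))
```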
{"language": "bn", "license": "apache-2.0", "tags": ["collaborative", "bengali", "NER"], "datasets": "xtreme", "metrics": ["Loss", "Accuracy", "Precision", "Recall"]}
SaulLu/recreate-history
null
[ "transformers", "pytorch", "albert", "token-classification", "collaborative", "bengali", "NER", "bn", "dataset:xtreme", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "bn" ]
TAGS #transformers #pytorch #albert #token-classification #collaborative #bengali #NER #bn #dataset-xtreme #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
sahajBERT Named Entity Recognition ================================== Model description ----------------- sahajBERT fine-tuned for NER using the bengali split of WikiANN . Named Entities predicted by the model: Intended uses & limitations --------------------------- #### How to use You can use this model directly with a pipeline for masked language modeling: #### Limitations and bias WIP Training data ------------- The model was initialized it with pre-trained weights of sahajBERT at step 19519 and trained on the bengali of WikiANN Training procedure ------------------ Coming soon! Eval results ------------ loss: 0.11714419722557068 accuracy: 0.9772286821705426 precision: 0.9585365853658536 recall: 0.9651277013752456 f1 : 0.9618208516886931 ### BibTeX entry and citation info Coming soon!
[ "#### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:", "#### Limitations and bias\n\n\nWIP\n\n\nTraining data\n-------------\n\n\nThe model was initialized it with pre-trained weights of sahajBERT at step 19519 and trained on the bengali of WikiANN\n\n\nTraining procedure\n------------------\n\n\nComing soon!\n\n\nEval results\n------------\n\n\nloss: 0.11714419722557068\n\n\naccuracy: 0.9772286821705426\n\n\nprecision: 0.9585365853658536\n\n\nrecall: 0.9651277013752456\n\n\nf1 : 0.9618208516886931", "### BibTeX entry and citation info\n\n\nComing soon!" ]
[ "TAGS\n#transformers #pytorch #albert #token-classification #collaborative #bengali #NER #bn #dataset-xtreme #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "#### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:", "#### Limitations and bias\n\n\nWIP\n\n\nTraining data\n-------------\n\n\nThe model was initialized it with pre-trained weights of sahajBERT at step 19519 and trained on the bengali of WikiANN\n\n\nTraining procedure\n------------------\n\n\nComing soon!\n\n\nEval results\n------------\n\n\nloss: 0.11714419722557068\n\n\naccuracy: 0.9772286821705426\n\n\nprecision: 0.9585365853658536\n\n\nrecall: 0.9651277013752456\n\n\nf1 : 0.9618208516886931", "### BibTeX entry and citation info\n\n\nComing soon!" ]
feature-extraction
transformers
# HTLM Pretraining Dataset: 23TB of simplified HTML extracted from Common Crawl dumps Paper: [HTLM: Hyper-Text Pre-Training and Prompting of Language Models](https://arxiv.org/abs/2107.06955) Authors: Armen Aghajanyan, Dmytro Okhonko, Mike Lewis, Mandar Joshi, Hu Xu, Gargi Ghosh, Luke Zettlemoyer Disclaimer: The team releasing HTLM did not write a model card for this model so this model card has been written by the Hugging Face team. ## Abstract We introduce HTLM, a hyper-text language model trained on a large-scale web crawl. Modeling hyper-text has a number of advantages: (1) it is easily gathered at scale, (2) it provides rich document-level and end-task-adjacent supervision (e.g. class and id attributes often encode document category information), and (3) it allows for new structured prompting that follows the established semantics of HTML (e.g. to do zero-shot summarization by infilling title tags for a webpage that contains the input text). We show that pretraining with a BART-style denoising loss directly on simplified HTML provides highly effective transfer for a wide range of end tasks and supervision levels. HTLM matches or exceeds the performance of comparably sized text-only LMs for zero-shot prompting and fine-tuning for classification benchmarks, while also setting new state-of-the-art performance levels for zero-shot summarization. We also find that hyper-text prompts provide more value to HTLM, in terms of data efficiency, than plain text prompts do for existing LMs, and that HTLM is highly effective at auto-prompting itself, by simply generating the most likely hyper-text formatting for any available training data. We will release all code and models to support future HTLM research. ## Usage For the moment you can use it as is to do a classic Mask Filling task (see snippet below) or fine-tune it on a downstream task. ``` from transformers import BartTokenizer, BartForConditionalGeneration TXT = "My friends are <mask> but they eat too many carbs." model_name = "SaulLu/test-add-new-model" tokenizer = BartTokenizer.from_pretrained(model_name) model = BartForConditionalGeneration.from_pretrained(model_name) input_ids = tokenizer([TXT], return_tensors='pt')['input_ids'] logits = model(input_ids).logits masked_index = (input_ids[0] == tokenizer.mask_token_id).nonzero().item() probs = logits[0, masked_index].softmax(dim=0) values, predictions = probs.topk(5) tokenizer.decode(predictions).split() ```
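The abstract above highlights structured hyper-text prompting, for example infilling a page's title tag as a form of zero-shot summarization. The following is a speculative sketch of that idea using the same BART-style API as the snippet in the card; the HTML prompt and generation settings are illustrative only, and outputs from this particular checkpoint may be rough.

```python
from transformers import BartTokenizer, BartForConditionalGeneration

model_name = "SaulLu/test-add-new-model"
tokenizer = BartTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)

# Hyper-text prompt: the <mask> sits inside the <title> element, so the model is asked
# to propose a plausible page title for the body text (a rough zero-shot summary).
html_prompt = (
    "<html><head><title><mask></title></head>"
    "<body><p>The city council voted on Tuesday to expand the riverside park "
    "and fund new cycling lanes across the downtown area.</p></body></html>"
)

input_ids = tokenizer([html_prompt], return_tensors="pt")["input_ids"]
generated = model.generate(input_ids, max_length=64, num_beams=4)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```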
{}
SaulLu/test-add-new-model
null
[ "transformers", "pytorch", "bart", "feature-extraction", "arxiv:2107.06955", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2107.06955" ]
[]
TAGS #transformers #pytorch #bart #feature-extraction #arxiv-2107.06955 #endpoints_compatible #has_space #region-us
# HTLM Pretraining Dataset: 23TB of simplified HTML extracted from common crawl dumps Paper: HTLM: Hyper-Text Pre-Training and Prompting of Language Models Authors: Armen Aghajanyan, Dmytro Okhonko, Mike Lewis, Mandar Joshi, Hu Xu, Gargi Ghosh, Luke Zettlemoyer Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Abstract We introduce HTLM, a hyper-text language model trained on a large-scale web crawl. Modeling hyper-text has a number of advantages: (1) it is easily gathered at scale, (2) it provides rich document-level and end-task-adjacent supervision (e.g. class and id attributes often encode document category information), and (3) it allows for new structured prompting that follows the established semantics of HTML (e.g. to do zero-shot summarization by infilling title tags for a webpage that contains the input text). We show that pretraining with a BART-style denoising loss directly on simplified HTML provides highly effective transfer for a wide range of end tasks and supervision levels. HTLM matches or exceeds the performance of comparably sized text-only LMs for zero-shot prompting and fine-tuning for classification benchmarks, while also setting new state-of-the-art performance levels for zero-shot summarization. We also find that hyper-text prompts provide more value to HTLM, in terms of data efficiency, than plain text prompts do for existing LMs, and that HTLM is highly effective at auto-prompting itself, by simply generating the most likely hyper-text formatting for any available training data. We will release all code and models to support future HTLM research. ## Usage For the moment you can use it as is to do a classic Mask Filling task (see snippet bellow) or fine-tune it on a downstream task.
[ "# HTLM\n\nPretraining Dataset: 23TB of simplified HTML extracted from common crawl dumps\n\nPaper: HTLM: Hyper-Text Pre-Training and Prompting of Language Models\n\nAuthors: Armen Aghajanyan, Dmytro Okhonko, Mike Lewis, Mandar Joshi, Hu Xu, Gargi Ghosh, Luke Zettlemoyer\n\nDisclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Abstract\n\nWe introduce HTLM, a hyper-text language model trained on a large-scale web crawl. Modeling hyper-text has a number of advantages: (1) it is easily gathered at scale, (2) it provides rich document-level and end-task-adjacent supervision (e.g. class and id attributes often encode document category information), and (3) it allows for new structured prompting that follows the established semantics of HTML (e.g. to do zero-shot summarization by infilling title tags for a webpage that contains the input text). We show that pretraining with a BART-style denoising loss directly on simplified HTML provides highly effective transfer for a wide range of end tasks and supervision levels. HTLM matches or exceeds the performance of comparably sized text-only LMs for zero-shot prompting and fine-tuning for classification benchmarks, while also setting new state-of-the-art performance levels for zero-shot summarization. We also find that hyper-text prompts provide more value to HTLM, in terms of data efficiency, than plain text prompts do for existing LMs, and that HTLM is highly effective at auto-prompting itself, by simply generating the most likely hyper-text formatting for any available training data. We will release all code and models to support future HTLM research.", "## Usage\n\nFor the moment you can use it as is to do a classic Mask Filling task (see snippet bellow) or fine-tune it on a downstream task." ]
[ "TAGS\n#transformers #pytorch #bart #feature-extraction #arxiv-2107.06955 #endpoints_compatible #has_space #region-us \n", "# HTLM\n\nPretraining Dataset: 23TB of simplified HTML extracted from common crawl dumps\n\nPaper: HTLM: Hyper-Text Pre-Training and Prompting of Language Models\n\nAuthors: Armen Aghajanyan, Dmytro Okhonko, Mike Lewis, Mandar Joshi, Hu Xu, Gargi Ghosh, Luke Zettlemoyer\n\nDisclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Abstract\n\nWe introduce HTLM, a hyper-text language model trained on a large-scale web crawl. Modeling hyper-text has a number of advantages: (1) it is easily gathered at scale, (2) it provides rich document-level and end-task-adjacent supervision (e.g. class and id attributes often encode document category information), and (3) it allows for new structured prompting that follows the established semantics of HTML (e.g. to do zero-shot summarization by infilling title tags for a webpage that contains the input text). We show that pretraining with a BART-style denoising loss directly on simplified HTML provides highly effective transfer for a wide range of end tasks and supervision levels. HTLM matches or exceeds the performance of comparably sized text-only LMs for zero-shot prompting and fine-tuning for classification benchmarks, while also setting new state-of-the-art performance levels for zero-shot summarization. We also find that hyper-text prompts provide more value to HTLM, in terms of data efficiency, than plain text prompts do for existing LMs, and that HTLM is highly effective at auto-prompting itself, by simply generating the most likely hyper-text formatting for any available training data. We will release all code and models to support future HTLM research.", "## Usage\n\nFor the moment you can use it as is to do a classic Mask Filling task (see snippet bellow) or fine-tune it on a downstream task." ]
null
transformers
# sahajBERT News Category Classification ## Model description You can embed local or remote images using `![](...)` ## Intended uses & limitations #### How to use ```python # You can include sample code which will be formatted ``` #### Limitations and bias Provide examples of latent issues and potential remediations. ## Training data Describe the data you used to train the model. If you initialized it with pre-trained weights, add a link to the pre-trained model card or repository with description of the pre-training data. ## Training procedure ### Collaborative training procedure [here](https://huggingface.co/albertvillanova) ### Preprocessing, hardware used, hyperparameters... ## Eval results ### BibTeX entry and citation info ```bibtex @inproceedings{..., year={2020} } ```
{"language": [], "tags": [], "datasets": [], "metrics": []}
SaulLu/test-model
null
[ "transformers", "pytorch", "albert", "pretraining", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #albert #pretraining #endpoints_compatible #region-us
# sahajBERT News Category Classification ## Model description You can embed local or remote images using '![](...)' ## Intended uses & limitations #### How to use #### Limitations and bias Provide examples of latent issues and potential remediations. ## Training data Describe the data you used to train the model. If you initialized it with pre-trained weights, add a link to the pre-trained model card or repository with description of the pre-training data. ## Training procedure ### Collaborative training procedure here ### Preprocessing, hardware used, hyperparameters... ## Eval results ### BibTeX entry and citation info
[ "# sahajBERT News Category Classification", "## Model description\n\nYou can embed local or remote images using '![](...)'", "## Intended uses & limitations", "#### How to use", "#### Limitations and bias\n\nProvide examples of latent issues and potential remediations.", "## Training data\n\nDescribe the data you used to train the model.\nIf you initialized it with pre-trained weights, add a link to the pre-trained model card or repository with description of the pre-training data.", "## Training procedure", "### Collaborative training procedure\n\nhere", "### \nPreprocessing, hardware used, hyperparameters...", "## Eval results", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #albert #pretraining #endpoints_compatible #region-us \n", "# sahajBERT News Category Classification", "## Model description\n\nYou can embed local or remote images using '![](...)'", "## Intended uses & limitations", "#### How to use", "#### Limitations and bias\n\nProvide examples of latent issues and potential remediations.", "## Training data\n\nDescribe the data you used to train the model.\nIf you initialized it with pre-trained weights, add a link to the pre-trained model card or repository with description of the pre-training data.", "## Training procedure", "### Collaborative training procedure\n\nhere", "### \nPreprocessing, hardware used, hyperparameters...", "## Eval results", "### BibTeX entry and citation info" ]
null
null
test readme test 2 test 3 test 4 test 5 test 6 test 7 test 8 test 9 test 10 test 11
{}
SaulLu/test-push-to-hub
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #region-us
test readme test 2 test 3 test 4 test 5 test 6 test 7 test 8 test 9 test 10 test 11
[]
[ "TAGS\n#region-us \n" ]
fill-mask
transformers
![](https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true) # FineTuning | **Architecture** | **Weights** | **Training Loss** | **Validation Loss** | |:-----------------------:|:---------------:|:----------------:|:----------------------:| | roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-roberta-base) | **0.641** | **0.4728** | | bert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-bert-base-uncased) | 0.6781 | 0.4977 | | albert-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-albert-base) | 0.7119 | 0.5155 | | xlm-roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-xlm-roberta-base) | 0.7225 | 0.525 | | bert-large-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-bert-large-uncased) | 0.7482 | 0.5161 | | albert-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-albert-large) | 1.075 | 0.9921 | | roberta-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-roberta-large) | 2.749 | 1.075 |
{"license": "cc0-1.0", "tags": ["kaggle"], "datasets": ["Commonlit-Readibility"], "thumbnail": "https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true"}
SauravMaheshkar/clr-finetuned-albert-base
null
[ "transformers", "pytorch", "albert", "fill-mask", "kaggle", "dataset:Commonlit-Readibility", "license:cc0-1.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #albert #fill-mask #kaggle #dataset-Commonlit-Readibility #license-cc0-1.0 #autotrain_compatible #endpoints_compatible #region-us
![](URL FineTuning ==========
[]
[ "TAGS\n#transformers #pytorch #albert #fill-mask #kaggle #dataset-Commonlit-Readibility #license-cc0-1.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
fill-mask
transformers
![](https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true) # FineTuning | **Architecture** | **Weights** | **Training Loss** | **Validation Loss** | |:-----------------------:|:---------------:|:----------------:|:----------------------:| | roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-roberta-base) | **0.641** | **0.4728** | | bert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-bert-base-uncased) | 0.6781 | 0.4977 | | albert-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-albert-base) | 0.7119 | 0.5155 | | xlm-roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-xlm-roberta-base) | 0.7225 | 0.525 | | bert-large-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-bert-large-uncased) | 0.7482 | 0.5161 | | albert-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-albert-large) | 1.075 | 0.9921 | | roberta-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-roberta-large) | 2.749 | 1.075 |
{"license": "cc0-1.0", "tags": ["kaggle"], "datasets": ["Commonlit-Readibility"], "thumbnail": "https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true"}
SauravMaheshkar/clr-finetuned-albert-large
null
[ "transformers", "pytorch", "albert", "fill-mask", "kaggle", "dataset:Commonlit-Readibility", "license:cc0-1.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #albert #fill-mask #kaggle #dataset-Commonlit-Readibility #license-cc0-1.0 #autotrain_compatible #endpoints_compatible #region-us
![](URL FineTuning ==========
[]
[ "TAGS\n#transformers #pytorch #albert #fill-mask #kaggle #dataset-Commonlit-Readibility #license-cc0-1.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
fill-mask
transformers
![](https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true) # FineTuning | **Architecture** | **Weights** | **Training Loss** | **Validation Loss** | |:-----------------------:|:---------------:|:----------------:|:----------------------:| | roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-roberta-base) | **0.641** | **0.4728** | | bert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-bert-base-uncased) | 0.6781 | 0.4977 | | albert-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-albert-base) | 0.7119 | 0.5155 | | xlm-roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-xlm-roberta-base) | 0.7225 | 0.525 | | bert-large-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-bert-large-uncased) | 0.7482 | 0.5161 | | albert-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-albert-large) | 1.075 | 0.9921 | | roberta-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-roberta-large) | 2.749 | 1.075 |
{"license": "cc0-1.0", "tags": ["kaggle"], "datasets": ["Commonlit-Readibility"], "thumbnail": "https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true"}
SauravMaheshkar/clr-finetuned-bert-base-uncased
null
[ "transformers", "pytorch", "bert", "fill-mask", "kaggle", "dataset:Commonlit-Readibility", "license:cc0-1.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #bert #fill-mask #kaggle #dataset-Commonlit-Readibility #license-cc0-1.0 #autotrain_compatible #endpoints_compatible #region-us
![](URL FineTuning ==========
[]
[ "TAGS\n#transformers #pytorch #bert #fill-mask #kaggle #dataset-Commonlit-Readibility #license-cc0-1.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
fill-mask
transformers
![](https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true) # FineTuning | **Architecture** | **Weights** | **Training Loss** | **Validation Loss** | |:-----------------------:|:---------------:|:----------------:|:----------------------:| | roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-roberta-base) | **0.641** | **0.4728** | | bert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-bert-base-uncased) | 0.6781 | 0.4977 | | albert-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-albert-base) | 0.7119 | 0.5155 | | xlm-roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-xlm-roberta-base) | 0.7225 | 0.525 | | bert-large-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-bert-large-uncased) | 0.7482 | 0.5161 | | albert-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-albert-large) | 1.075 | 0.9921 | | roberta-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-roberta-large) | 2.749 | 1.075 |
{"license": "cc0-1.0", "tags": ["kaggle"], "datasets": ["Commonlit-Readibility"], "thumbnail": "https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true"}
SauravMaheshkar/clr-finetuned-bert-large-uncased
null
[ "transformers", "pytorch", "bert", "fill-mask", "kaggle", "dataset:Commonlit-Readibility", "license:cc0-1.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #bert #fill-mask #kaggle #dataset-Commonlit-Readibility #license-cc0-1.0 #autotrain_compatible #endpoints_compatible #region-us
![](URL FineTuning ==========
[]
[ "TAGS\n#transformers #pytorch #bert #fill-mask #kaggle #dataset-Commonlit-Readibility #license-cc0-1.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
fill-mask
transformers
![](https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true) # FineTuning | **Architecture** | **Weights** | **Training Loss** | **Validation Loss** | |:-----------------------:|:---------------:|:----------------:|:----------------------:| | roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-roberta-base) | **0.641** | **0.4728** | | bert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-bert-base-uncased) | 0.6781 | 0.4977 | | albert-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-albert-base) | 0.7119 | 0.5155 | | xlm-roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-xlm-roberta-base) | 0.7225 | 0.525 | | bert-large-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-bert-large-uncased) | 0.7482 | 0.5161 | | albert-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-albert-large) | 1.075 | 0.9921 | | roberta-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-roberta-large) | 2.749 | 1.075 |
{"license": "cc0-1.0", "tags": ["kaggle"], "datasets": ["Commonlit-Readibility"], "thumbnail": "https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true"}
SauravMaheshkar/clr-finetuned-roberta-base
null
[ "transformers", "pytorch", "roberta", "fill-mask", "kaggle", "dataset:Commonlit-Readibility", "license:cc0-1.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #roberta #fill-mask #kaggle #dataset-Commonlit-Readibility #license-cc0-1.0 #autotrain_compatible #endpoints_compatible #region-us
![](URL FineTuning ==========
[]
[ "TAGS\n#transformers #pytorch #roberta #fill-mask #kaggle #dataset-Commonlit-Readibility #license-cc0-1.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
fill-mask
transformers
![](https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true) # FineTuning | **Architecture** | **Weights** | **Training Loss** | **Validation Loss** | |:-----------------------:|:---------------:|:----------------:|:----------------------:| | roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-roberta-base) | **0.641** | **0.4728** | | bert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-bert-base-uncased) | 0.6781 | 0.4977 | | albert-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-albert-base) | 0.7119 | 0.5155 | | xlm-roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-xlm-roberta-base) | 0.7225 | 0.525 | | bert-large-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-bert-large-uncased) | 0.7482 | 0.5161 | | albert-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-albert-large) | 1.075 | 0.9921 | | roberta-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-roberta-large) | 2.749 | 1.075 |
{"license": "cc0-1.0", "tags": ["kaggle"], "datasets": ["Commonlit-Readibility"], "thumbnail": "https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true"}
SauravMaheshkar/clr-finetuned-roberta-large
null
[ "transformers", "pytorch", "roberta", "fill-mask", "kaggle", "dataset:Commonlit-Readibility", "license:cc0-1.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #roberta #fill-mask #kaggle #dataset-Commonlit-Readibility #license-cc0-1.0 #autotrain_compatible #endpoints_compatible #region-us
![](URL FineTuning ==========
[]
[ "TAGS\n#transformers #pytorch #roberta #fill-mask #kaggle #dataset-Commonlit-Readibility #license-cc0-1.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
fill-mask
transformers
![](https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true) # FineTuning | **Architecture** | **Weights** | **Training Loss** | **Validation Loss** | |:-----------------------:|:---------------:|:----------------:|:----------------------:| | roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-roberta-base) | **0.641** | **0.4728** | | bert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-bert-base-uncased) | 0.6781 | 0.4977 | | albert-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-albert-base) | 0.7119 | 0.5155 | | xlm-roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-xlm-roberta-base) | 0.7225 | 0.525 | | bert-large-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-bert-large-uncased) | 0.7482 | 0.5161 | | albert-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-albert-large) | 1.075 | 0.9921 | | roberta-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-roberta-large) | 2.749 | 1.075 |
{"license": "cc0-1.0", "tags": ["kaggle"], "datasets": ["Commonlit-Readibility"], "thumbnail": "https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true"}
SauravMaheshkar/clr-finetuned-xlm-roberta-base
null
[ "transformers", "pytorch", "xlm-roberta", "fill-mask", "kaggle", "dataset:Commonlit-Readibility", "license:cc0-1.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #xlm-roberta #fill-mask #kaggle #dataset-Commonlit-Readibility #license-cc0-1.0 #autotrain_compatible #endpoints_compatible #region-us
![](URL FineTuning ==========
[]
[ "TAGS\n#transformers #pytorch #xlm-roberta #fill-mask #kaggle #dataset-Commonlit-Readibility #license-cc0-1.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
fill-mask
transformers
![](https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true) # PreTraining | **Architecture** | **Weights** | **PreTraining Loss** | **PreTraining Perplexity** | |:-----------------------:|:---------------:|:----------------:|:----------------------:| | roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-roberta-base) | **0.3488** | **3.992** | | bert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-bert-base-uncased) | 0.3909 | 6.122 | | electra-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-large) | 0.723 | 6.394 | | albert-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-albert-base) | 0.7343 | 7.76 | | electra-small | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-small) | 0.9226 | 11.098 | | electra-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-base) | 0.9468 | 8.783 | | distilbert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-distilbert-base-uncased) | 1.082 | 7.963 |
{"license": "cc0-1.0", "tags": ["kaggle"], "datasets": ["Commonlit-Readibility"], "metrics": ["Perplexity"], "thumbnail": "https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true"}
SauravMaheshkar/clr-pretrained-albert-base
null
[ "transformers", "pytorch", "albert", "fill-mask", "kaggle", "dataset:Commonlit-Readibility", "license:cc0-1.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #albert #fill-mask #kaggle #dataset-Commonlit-Readibility #license-cc0-1.0 #autotrain_compatible #endpoints_compatible #region-us
![](URL PreTraining ===========
[]
[ "TAGS\n#transformers #pytorch #albert #fill-mask #kaggle #dataset-Commonlit-Readibility #license-cc0-1.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
fill-mask
transformers
![](https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true) # PreTraining | **Architecture** | **Weights** | **PreTraining Loss** | **PreTraining Perplexity** | |:-----------------------:|:---------------:|:----------------:|:----------------------:| | roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-roberta-base) | **0.3488** | **3.992** | | bert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-bert-base-uncased) | 0.3909 | 6.122 | | electra-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-large) | 0.723 | 6.394 | | albert-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-albert-base) | 0.7343 | 7.76 | | electra-small | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-small) | 0.9226 | 11.098 | | electra-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-base) | 0.9468 | 8.783 | | distilbert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-distilbert-base-uncased) | 1.082 | 7.963 |
{"license": "cc0-1.0", "tags": ["kaggle"], "datasets": ["Commonlit-Readibility"], "metrics": ["Perplexity"], "thumbnail": "https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true"}
SauravMaheshkar/clr-pretrained-bert-base-uncased
null
[ "transformers", "pytorch", "safetensors", "bert", "fill-mask", "kaggle", "dataset:Commonlit-Readibility", "license:cc0-1.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #safetensors #bert #fill-mask #kaggle #dataset-Commonlit-Readibility #license-cc0-1.0 #autotrain_compatible #endpoints_compatible #region-us
![](URL PreTraining ===========
[]
[ "TAGS\n#transformers #pytorch #safetensors #bert #fill-mask #kaggle #dataset-Commonlit-Readibility #license-cc0-1.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
fill-mask
transformers
![](https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true) # PreTraining | **Architecture** | **Weights** | **PreTraining Loss** | **PreTraining Perplexity** | |:-----------------------:|:---------------:|:----------------:|:----------------------:| | roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-roberta-base) | **0.3488** | **3.992** | | bert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-bert-base-uncased) | 0.3909 | 6.122 | | electra-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-large) | 0.723 | 6.394 | | albert-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-albert-base) | 0.7343 | 7.76 | | electra-small | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-small) | 0.9226 | 11.098 | | electra-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-base) | 0.9468 | 8.783 | | distilbert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-distilbert-base-uncased) | 1.082 | 7.963 |
{"license": "cc0-1.0", "tags": ["kaggle"], "datasets": ["Commonlit-Readibility"], "metrics": ["Perplexity"], "thumbnail": "https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true"}
SauravMaheshkar/clr-pretrained-distilbert-base-uncased
null
[ "transformers", "pytorch", "distilbert", "fill-mask", "kaggle", "dataset:Commonlit-Readibility", "license:cc0-1.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #distilbert #fill-mask #kaggle #dataset-Commonlit-Readibility #license-cc0-1.0 #autotrain_compatible #endpoints_compatible #region-us
![](URL PreTraining ===========
[]
[ "TAGS\n#transformers #pytorch #distilbert #fill-mask #kaggle #dataset-Commonlit-Readibility #license-cc0-1.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
null
transformers
![](https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true) # PreTraining | **Architecture** | **Weights** | **PreTraining Loss** | **PreTraining Perplexity** | |:-----------------------:|:---------------:|:----------------:|:----------------------:| | roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-roberta-base) | **0.3488** | **3.992** | | bert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-bert-base-uncased) | 0.3909 | 6.122 | | electra-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-large) | 0.723 | 6.394 | | albert-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-albert-base) | 0.7343 | 7.76 | | electra-small | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-small) | 0.9226 | 11.098 | | electra-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-base) | 0.9468 | 8.783 | | distilbert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-distilbert-base-uncased) | 1.082 | 7.963 |
{"license": "cc0-1.0", "tags": ["kaggle"], "datasets": ["Commonlit-Readibility"], "metrics": ["Perplexity"], "thumbnail": "https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true"}
SauravMaheshkar/clr-pretrained-electra-base
null
[ "transformers", "pytorch", "electra", "pretraining", "kaggle", "dataset:Commonlit-Readibility", "license:cc0-1.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #electra #pretraining #kaggle #dataset-Commonlit-Readibility #license-cc0-1.0 #endpoints_compatible #region-us
![](URL PreTraining ===========
[]
[ "TAGS\n#transformers #pytorch #electra #pretraining #kaggle #dataset-Commonlit-Readibility #license-cc0-1.0 #endpoints_compatible #region-us \n" ]
null
transformers
![](https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true) # PreTraining | **Architecture** | **Weights** | **PreTraining Loss** | **PreTraining Perplexity** | |:-----------------------:|:---------------:|:----------------:|:----------------------:| | roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-roberta-base) | **0.3488** | **3.992** | | bert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-bert-base-uncased) | 0.3909 | 6.122 | | electra-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-large) | 0.723 | 6.394 | | albert-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-albert-base) | 0.7343 | 7.76 | | electra-small | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-small) | 0.9226 | 11.098 | | electra-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-base) | 0.9468 | 8.783 | | distilbert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-distilbert-base-uncased) | 1.082 | 7.963 |
{"license": "cc0-1.0", "tags": ["kaggle"], "datasets": ["Commonlit-Readibility"], "metrics": ["Perplexity"], "thumbnail": "https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true"}
SauravMaheshkar/clr-pretrained-electra-large
null
[ "transformers", "pytorch", "electra", "pretraining", "kaggle", "dataset:Commonlit-Readibility", "license:cc0-1.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #electra #pretraining #kaggle #dataset-Commonlit-Readibility #license-cc0-1.0 #endpoints_compatible #region-us
![](URL PreTraining ===========
[]
[ "TAGS\n#transformers #pytorch #electra #pretraining #kaggle #dataset-Commonlit-Readibility #license-cc0-1.0 #endpoints_compatible #region-us \n" ]
null
transformers
![](https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true) # PreTraining | **Architecture** | **Weights** | **PreTraining Loss** | **PreTraining Perplexity** | |:-----------------------:|:---------------:|:----------------:|:----------------------:| | roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-roberta-base) | **0.3488** | **3.992** | | bert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-bert-base-uncased) | 0.3909 | 6.122 | | electra-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-large) | 0.723 | 6.394 | | albert-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-albert-base) | 0.7343 | 7.76 | | electra-small | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-small) | 0.9226 | 11.098 | | electra-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-base) | 0.9468 | 8.783 | | distilbert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-distilbert-base-uncased) | 1.082 | 7.963 |
{"license": "cc0-1.0", "tags": ["kaggle"], "datasets": ["Commonlit-Readibility"], "metrics": ["Perplexity"], "thumbnail": "https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true"}
SauravMaheshkar/clr-pretrained-electra-small
null
[ "transformers", "pytorch", "electra", "pretraining", "kaggle", "dataset:Commonlit-Readibility", "license:cc0-1.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #electra #pretraining #kaggle #dataset-Commonlit-Readibility #license-cc0-1.0 #endpoints_compatible #region-us
![](URL PreTraining ===========
[]
[ "TAGS\n#transformers #pytorch #electra #pretraining #kaggle #dataset-Commonlit-Readibility #license-cc0-1.0 #endpoints_compatible #region-us \n" ]
fill-mask
transformers
![](https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true) # PreTraining | **Architecture** | **Weights** | **PreTraining Loss** | **PreTraining Perplexity** | |:-----------------------:|:---------------:|:----------------:|:----------------------:| | roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-roberta-base) | **0.3488** | **3.992** | | bert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-bert-base-uncased) | 0.3909 | 6.122 | | electra-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-large) | 0.723 | 6.394 | | albert-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-albert-base) | 0.7343 | 7.76 | | electra-small | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-small) | 0.9226 | 11.098 | | electra-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-base) | 0.9468 | 8.783 | | distilbert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-distilbert-base-uncased) | 1.082 | 7.963 |
{"license": "cc0-1.0", "tags": ["kaggle"], "datasets": ["Commonlit-Readibility"], "metrics": ["Perplexity"], "thumbnail": "https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true"}
SauravMaheshkar/clr-pretrained-roberta-base
null
[ "transformers", "pytorch", "safetensors", "roberta", "fill-mask", "kaggle", "dataset:Commonlit-Readibility", "license:cc0-1.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #safetensors #roberta #fill-mask #kaggle #dataset-Commonlit-Readibility #license-cc0-1.0 #autotrain_compatible #endpoints_compatible #region-us
![](URL PreTraining ===========
[]
[ "TAGS\n#transformers #pytorch #safetensors #roberta #fill-mask #kaggle #dataset-Commonlit-Readibility #license-cc0-1.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
question-answering
null
<div align = "center"> <img src = "https://github.com/SauravMaheshkar/chaii-Hindi-Tamil-QA/blob/main/assets/Coffee%20Banner.png?raw=true"> </div> This dataset contains the [**google/rembert**](https://huggingface.co/transformers/model_doc/rembert.html) model weights according to my team's experimentation strategy during the [**chaii - Hindi and Tamil Question Answering**](https://www.kaggle.com/c/chaii-hindi-and-tamil-question-answering) competition. They are listed below with their corresponding public LB score:- | Huggingface Hub Link | Public LB Score | | :---: | :---: | | [**SauravMaheshkar/rembert-maxseq-400-docstride-128-chaii**](https://huggingface.co/SauravMaheshkar/rembert-maxseq-400-docstride-128-chaii) | 0.724 | | [**SauravMaheshkar/rembert-maxseq-384-docstride-135-chaii**](https://huggingface.co/SauravMaheshkar/rembert-maxseq-384-docstride-135-chaii) | 0.723 | | [**SauravMaheshkar/rembert-maxseq-400-docstride-135-chaii**](https://huggingface.co/SauravMaheshkar/rembert-maxseq-400-docstride-135-chaii) | 0.737 | | [**SauravMaheshkar/rembert-maxseq-384-docstride-128-chaii**](https://huggingface.co/SauravMaheshkar/rembert-maxseq-384-docstride-128-chaii) | 0.725 |
{"language": "multilingual", "license": "cc0-1.0", "tags": ["kaggle", "rembert", "pytorch", "question-answering"], "datasets": ["Commonlit-Readibility"], "thumbnail": "https://github.com/SauravMaheshkar/chaii-Hindi-Tamil-QA/blob/main/assets/Coffee%20Banner.png?raw=true", "inference": false}
SauravMaheshkar/rembert-maxseq-384-docstride-128-chaii
null
[ "kaggle", "rembert", "pytorch", "question-answering", "multilingual", "dataset:Commonlit-Readibility", "license:cc0-1.0", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "multilingual" ]
TAGS #kaggle #rembert #pytorch #question-answering #multilingual #dataset-Commonlit-Readibility #license-cc0-1.0 #region-us
![]() This dataset contains the google/rembert model weights according to my team's experimentation strategy during the chaii - Hindi and Tamil Question Answering competition. They are listed below with their corresponding public LB score:-
[]
[ "TAGS\n#kaggle #rembert #pytorch #question-answering #multilingual #dataset-Commonlit-Readibility #license-cc0-1.0 #region-us \n" ]