| column | dtype | values / lengths |
|---|---|---|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 198 values |
| text | stringlengths | 1–900k |
| metadata | stringlengths | 2–438k |
| id | stringlengths | 5–122 |
| last_modified | null | |
| tags | listlengths | 1–1.84k |
| sha | null | |
| created_at | stringlengths | 25–25 |
| arxiv | listlengths | 0–201 |
| languages | listlengths | 0–1.83k |
| tags_str | stringlengths | 17–9.34k |
| text_str | stringlengths | 0–389k |
| text_lists | listlengths | 0–722 |
| processed_texts | listlengths | 1–723 |
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0611 - Precision: 0.9272 - Recall: 0.9382 - F1: 0.9327 - Accuracy: 0.9843 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2432 | 1.0 | 878 | 0.0689 | 0.9132 | 0.9203 | 0.9168 | 0.9813 | | 0.0507 | 2.0 | 1756 | 0.0608 | 0.9208 | 0.9346 | 0.9276 | 0.9835 | | 0.03 | 3.0 | 2634 | 0.0611 | 0.9272 | 0.9382 | 0.9327 | 0.9843 | ### Framework versions - Transformers 4.9.1 - Pytorch 1.9.0+cu102 - Datasets 1.10.2 - Tokenizers 0.10.3
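A minimal usage sketch for this checkpoint, assuming the repo id `codingJacob/distilbert-base-uncased-finetuned-ner` listed with this card and the standard `transformers` token-classification pipeline (this snippet is illustrative, not part of the generated card):

```python
from transformers import pipeline

# Load the fine-tuned NER checkpoint with the generic token-classification
# pipeline and merge word pieces into whole entity spans.
ner = pipeline(
    "token-classification",
    model="codingJacob/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",
)

print(ner("Hugging Face is based in New York City."))
```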
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "distilbert-base-uncased-finetuned-ner", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9843042559613643}}]}]}
codingJacob/distilbert-base-uncased-finetuned-ner
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-uncased-finetuned-ner ===================================== This model is a fine-tuned version of distilbert-base-uncased on the conll2003 dataset. It achieves the following results on the evaluation set: * Loss: 0.0611 * Precision: 0.9272 * Recall: 0.9382 * F1: 0.9327 * Accuracy: 0.9843 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.9.1 * Pytorch 1.9.0+cu102 * Datasets 1.10.2 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.9.1\n* Pytorch 1.9.0+cu102\n* Datasets 1.10.2\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.9.1\n* Pytorch 1.9.0+cu102\n* Datasets 1.10.2\n* Tokenizers 0.10.3" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0605 - Precision: 0.9251 - Recall: 0.9357 - F1: 0.9304 - Accuracy: 0.9837 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2402 | 1.0 | 878 | 0.0694 | 0.9168 | 0.9215 | 0.9191 | 0.9814 | | 0.051 | 2.0 | 1756 | 0.0595 | 0.9249 | 0.9330 | 0.9289 | 0.9833 | | 0.0302 | 3.0 | 2634 | 0.0605 | 0.9251 | 0.9357 | 0.9304 | 0.9837 | ### Framework versions - Transformers 4.9.2 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "distilbert-base-uncased-finetuned-ner", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9837323462595516}}]}]}
cogito233/distilbert-base-uncased-finetuned-ner
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-uncased-finetuned-ner ===================================== This model is a fine-tuned version of distilbert-base-uncased on the conll2003 dataset. It achieves the following results on the evaluation set: * Loss: 0.0605 * Precision: 0.9251 * Recall: 0.9357 * F1: 0.9304 * Accuracy: 0.9837 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.9.2 * Pytorch 1.9.0+cu102 * Datasets 1.11.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3" ]
feature-extraction
transformers
# LaBSE for English and Russian

This is a truncated version of [sentence-transformers/LaBSE](https://huggingface.co/sentence-transformers/LaBSE), which is, in turn, a port of [LaBSE](https://tfhub.dev/google/LaBSE/1) by Google.

The current model has only English and Russian tokens left in the vocabulary. Thus, the vocabulary is 10% of the original, and the number of parameters in the whole model is 27% of the original, without any loss in the quality of English and Russian embeddings.

To get the sentence embeddings, you can use the following code:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("cointegrated/LaBSE-en-ru")
model = AutoModel.from_pretrained("cointegrated/LaBSE-en-ru")

sentences = ["Hello World", "Привет Мир"]
encoded_input = tokenizer(sentences, padding=True, truncation=True, max_length=64, return_tensors='pt')
with torch.no_grad():
    model_output = model(**encoded_input)
embeddings = model_output.pooler_output
embeddings = torch.nn.functional.normalize(embeddings)
print(embeddings)
```

The model has been truncated in [this notebook](https://colab.research.google.com/drive/1dnPRn0-ugj3vZgSpyCC9sgslM2SuSfHy?usp=sharing). You can adapt it for other languages (like [EIStakovskii/LaBSE-fr-de](https://huggingface.co/EIStakovskii/LaBSE-fr-de)), models, or datasets.

## Reference

Fangxiaoyu Feng, Yinfei Yang, Daniel Cer, Naveen Arivazhagan, Wei Wang. [Language-agnostic BERT Sentence Embedding](https://arxiv.org/abs/2007.01852). July 2020.

License: [https://tfhub.dev/google/LaBSE/1](https://tfhub.dev/google/LaBSE/1)
{"language": ["ru", "en"], "tags": ["feature-extraction", "embeddings", "sentence-similarity"]}
cointegrated/LaBSE-en-ru
null
[ "transformers", "pytorch", "tf", "safetensors", "bert", "pretraining", "feature-extraction", "embeddings", "sentence-similarity", "ru", "en", "arxiv:2007.01852", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2007.01852" ]
[ "ru", "en" ]
TAGS #transformers #pytorch #tf #safetensors #bert #pretraining #feature-extraction #embeddings #sentence-similarity #ru #en #arxiv-2007.01852 #endpoints_compatible #has_space #region-us
# LaBSE for English and Russian This is a truncated version of sentence-transformers/LaBSE, which is, in turn, a port of LaBSE by Google. The current model has only English and Russian tokens left in the vocabulary. Thus, the vocabulary is 10% of the original, and number of parameters in the whole model is 27% of the original, without any loss in the quality of English and Russian embeddings. To get the sentence embeddings, you can use the following code: The model has been truncated in this notebook. You can adapt it for other languages (like EIStakovskii/LaBSE-fr-de), models or datasets. ## Reference: Fangxiaoyu Feng, Yinfei Yang, Daniel Cer, Narveen Ari, Wei Wang. Language-agnostic BERT Sentence Embedding. July 2020 License: URL
[ "# LaBSE for English and Russian\nThis is a truncated version of sentence-transformers/LaBSE, which is, in turn, a port of LaBSE by Google.\n\nThe current model has only English and Russian tokens left in the vocabulary.\nThus, the vocabulary is 10% of the original, and number of parameters in the whole model is 27% of the original, without any loss in the quality of English and Russian embeddings.\n \nTo get the sentence embeddings, you can use the following code:\n\n\nThe model has been truncated in this notebook.\nYou can adapt it for other languages (like EIStakovskii/LaBSE-fr-de), models or datasets.", "## Reference:\nFangxiaoyu Feng, Yinfei Yang, Daniel Cer, Narveen Ari, Wei Wang. Language-agnostic BERT Sentence Embedding. July 2020\n\nLicense: URL" ]
[ "TAGS\n#transformers #pytorch #tf #safetensors #bert #pretraining #feature-extraction #embeddings #sentence-similarity #ru #en #arxiv-2007.01852 #endpoints_compatible #has_space #region-us \n", "# LaBSE for English and Russian\nThis is a truncated version of sentence-transformers/LaBSE, which is, in turn, a port of LaBSE by Google.\n\nThe current model has only English and Russian tokens left in the vocabulary.\nThus, the vocabulary is 10% of the original, and number of parameters in the whole model is 27% of the original, without any loss in the quality of English and Russian embeddings.\n \nTo get the sentence embeddings, you can use the following code:\n\n\nThe model has been truncated in this notebook.\nYou can adapt it for other languages (like EIStakovskii/LaBSE-fr-de), models or datasets.", "## Reference:\nFangxiaoyu Feng, Yinfei Yang, Daniel Cer, Narveen Ari, Wei Wang. Language-agnostic BERT Sentence Embedding. July 2020\n\nLicense: URL" ]
text-classification
transformers
This is a RoBERTa-large classifier trained on the CoLA corpus [Warstadt et al., 2019](https://www.mitpressjournals.org/doi/pdf/10.1162/tacl_a_00290), which contains sentences paired with grammatical acceptability judgments. The model can be used to evaluate the fluency of machine-generated English sentences, e.g. for the evaluation of text style transfer.

The model was trained for the paper [Krishna et al., 2020. Reformulating Unsupervised Style Transfer as Paraphrase Generation](https://arxiv.org/abs/2010.05700), and its original version is available at [their project page](http://style.cs.umass.edu). We converted this model from Fairseq to the Transformers format. All credit goes to the authors of the original paper.

## Citation

If you found this model useful and refer to it, please cite the original work:

```
@inproceedings{style20,
    author={Kalpesh Krishna and John Wieting and Mohit Iyyer},
    Booktitle = {Empirical Methods in Natural Language Processing},
    Year = "2020",
    Title={Reformulating Unsupervised Style Transfer as Paraphrase Generation},
}
```
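A minimal scoring sketch, assuming the checkpoint loads as a standard sequence-classification model; which class index corresponds to "acceptable" is not documented in the card, so it should be read from `model.config.id2label` rather than assumed:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "cointegrated/roberta-large-cola-krishna2020"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

def acceptability_probs(sentences):
    """Return class probabilities per sentence; check model.config.id2label
    to see which column corresponds to grammatical acceptability."""
    batch = tokenizer(sentences, return_tensors="pt", padding=True, truncation=True)
    with torch.inference_mode():
        return torch.softmax(model(**batch).logits, -1)

print(model.config.id2label)
print(acceptability_probs(["The cat sat on the mat.", "The sat cat mat on the."]))
```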
{}
cointegrated/roberta-large-cola-krishna2020
null
[ "transformers", "pytorch", "safetensors", "roberta", "text-classification", "arxiv:2010.05700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2010.05700" ]
[]
TAGS #transformers #pytorch #safetensors #roberta #text-classification #arxiv-2010.05700 #autotrain_compatible #endpoints_compatible #region-us
This is a RoBERTa-large classifier trained on the CoLA corpus Warstadt et al., 2019, which contains sentences paired with grammatical acceptability judgments. The model can be used to evaluate fluency of machine-generated English sentences, e.g. for evaluation of text style transfer. The model was trained in the paper Krishna et al, 2020. Reformulating Unsupervised Style Transfer as Paraphrase Generation, and its original version is available at their project page. We converted this model from Fairseq to Transformers format. All credit goes to the authors of the original paper. If you found this model useful and refer to it, please cite the original work:
[]
[ "TAGS\n#transformers #pytorch #safetensors #roberta #text-classification #arxiv-2010.05700 #autotrain_compatible #endpoints_compatible #region-us \n" ]
text-classification
transformers
This is a version of paraphrase detector by DeepPavlov ([details in the documentation](http://docs.deeppavlov.ai/en/master/features/overview.html#ranking-model-docs)) ported to the `Transformers` format. All credit goes to the authors of DeepPavlov. The model has been trained on the dataset from http://paraphraser.ru/. It classifies texts as paraphrases (class 1) or non-paraphrases (class 0). ```python import torch from transformers import AutoModelForSequenceClassification, BertTokenizer model_name = 'cointegrated/rubert-base-cased-dp-paraphrase-detection' model = AutoModelForSequenceClassification.from_pretrained(model_name).cuda() tokenizer = BertTokenizer.from_pretrained(model_name) def compare_texts(text1, text2): batch = tokenizer(text1, text2, return_tensors='pt').to(model.device) with torch.inference_mode(): proba = torch.softmax(model(**batch).logits, -1).cpu().numpy() return proba[0] # p(non-paraphrase), p(paraphrase) print(compare_texts('Сегодня на улице хорошая погода', 'Сегодня на улице отвратительная погода')) # [0.7056226 0.2943774] print(compare_texts('Сегодня на улице хорошая погода', 'Отличная погодка сегодня выдалась')) # [0.16524374 0.8347562 ] ``` P.S. In the DeepPavlov repository, the tokenizer uses `max_seq_length=64`. This model, however, uses `model_max_length=512`. Therefore, the results on long texts may be inadequate.
{"language": ["ru"], "tags": ["sentence-similarity", "text-classification"], "datasets": ["merionum/ru_paraphraser"]}
cointegrated/rubert-base-cased-dp-paraphrase-detection
null
[ "transformers", "pytorch", "safetensors", "bert", "text-classification", "sentence-similarity", "ru", "dataset:merionum/ru_paraphraser", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ru" ]
TAGS #transformers #pytorch #safetensors #bert #text-classification #sentence-similarity #ru #dataset-merionum/ru_paraphraser #autotrain_compatible #endpoints_compatible #region-us
This is a version of paraphrase detector by DeepPavlov (details in the documentation) ported to the 'Transformers' format. All credit goes to the authors of DeepPavlov. The model has been trained on the dataset from URL It classifies texts as paraphrases (class 1) or non-paraphrases (class 0). P.S. In the DeepPavlov repository, the tokenizer uses 'max_seq_length=64'. This model, however, uses 'model_max_length=512'. Therefore, the results on long texts may be inadequate.
[]
[ "TAGS\n#transformers #pytorch #safetensors #bert #text-classification #sentence-similarity #ru #dataset-merionum/ru_paraphraser #autotrain_compatible #endpoints_compatible #region-us \n" ]
zero-shot-classification
transformers
# RuBERT for NLI (natural language inference)

This is the [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) model fine-tuned to predict the logical relationship between two short texts: entailment, contradiction, or neutral.

## Usage

How to run the model for NLI:

```python
# !pip install transformers sentencepiece --quiet
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_checkpoint = 'cointegrated/rubert-base-cased-nli-threeway'
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(model_checkpoint)
if torch.cuda.is_available():
    model.cuda()

text1 = 'Сократ - человек, а все люди смертны.'
text2 = 'Сократ никогда не умрёт.'
with torch.inference_mode():
    out = model(**tokenizer(text1, text2, return_tensors='pt').to(model.device))
    proba = torch.softmax(out.logits, -1).cpu().numpy()[0]
print({v: proba[k] for k, v in model.config.id2label.items()})
# {'entailment': 0.009525929, 'contradiction': 0.9332064, 'neutral': 0.05726764}
```

You can also use this model for zero-shot short text classification (by labels only), e.g. for sentiment analysis:

```python
def predict_zero_shot(text, label_texts, model, tokenizer, label='entailment', normalize=True):
    tokens = tokenizer([text] * len(label_texts), label_texts, truncation=True, return_tensors='pt', padding=True)
    with torch.inference_mode():
        result = torch.softmax(model(**tokens.to(model.device)).logits, -1)
    proba = result[:, model.config.label2id[label]].cpu().numpy()
    if normalize:
        proba /= sum(proba)
    return proba

classes = ['Я доволен', 'Я недоволен']
predict_zero_shot('Какая гадость эта ваша заливная рыба!', classes, model, tokenizer)
# array([0.05609814, 0.9439019 ], dtype=float32)
predict_zero_shot('Какая вкусная эта ваша заливная рыба!', classes, model, tokenizer)
# array([0.9059292 , 0.09407079], dtype=float32)
```

Alternatively, you can use [Huggingface pipelines](https://huggingface.co/transformers/main_classes/pipelines.html) for inference.

## Sources

The model has been trained on a series of NLI datasets automatically translated to Russian from English. Most datasets were taken [from the repo of Felipe Salvatore](https://github.com/felipessalvatore/NLI_datasets): [JOCI](https://github.com/sheng-z/JOCI), [MNLI](https://cims.nyu.edu/~sbowman/multinli/), [MPE](https://aclanthology.org/I17-1011/), [SICK](http://www.lrec-conf.org/proceedings/lrec2014/pdf/363_Paper.pdf), [SNLI](https://nlp.stanford.edu/projects/snli/). Some datasets were obtained from the original sources: [ANLI](https://github.com/facebookresearch/anli), [NLI-style FEVER](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [IMPPRES](https://github.com/facebookresearch/Imppres).
## Performance

The table below shows ROC AUC (one class vs rest) for five models on the corresponding *dev* sets:

- [tiny](https://huggingface.co/cointegrated/rubert-tiny-bilingual-nli): a small BERT predicting entailment vs not_entailment
- [twoway](https://huggingface.co/cointegrated/rubert-base-cased-nli-twoway): a base-sized BERT predicting entailment vs not_entailment
- [threeway](https://huggingface.co/cointegrated/rubert-base-cased-nli-threeway) (**this model**): a base-sized BERT predicting entailment vs contradiction vs neutral
- [vicgalle-xlm](https://huggingface.co/vicgalle/xlm-roberta-large-xnli-anli): a large multilingual NLI model
- [facebook-bart](https://huggingface.co/facebook/bart-large-mnli): a large multilingual NLI model

| model | add_one_rte | anli_r1 | anli_r2 | anli_r3 | copa | fever | help | iie | imppres | joci | mnli | monli | mpe | scitail | sick | snli | terra | total |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| n_observations | 387 | 1000 | 1000 | 1200 | 200 | 20474 | 3355 | 31232 | 7661 | 939 | 19647 | 269 | 1000 | 2126 | 500 | 9831 | 307 | 101128 |
| tiny/entailment | 0.77 | 0.59 | 0.52 | 0.53 | 0.53 | 0.90 | 0.81 | 0.78 | 0.93 | 0.81 | 0.82 | 0.91 | 0.81 | 0.78 | 0.93 | 0.95 | 0.67 | 0.77 |
| twoway/entailment | 0.89 | 0.73 | 0.61 | 0.62 | 0.58 | 0.96 | 0.92 | 0.87 | 0.99 | 0.90 | 0.90 | 0.99 | 0.91 | 0.96 | 0.97 | 0.97 | 0.87 | 0.86 |
| threeway/entailment | 0.91 | 0.75 | 0.61 | 0.61 | 0.57 | 0.96 | 0.56 | 0.61 | 0.99 | 0.90 | 0.91 | 0.67 | 0.92 | 0.84 | 0.98 | 0.98 | 0.90 | 0.80 |
| vicgalle-xlm/entailment | 0.88 | 0.79 | 0.63 | 0.66 | 0.57 | 0.93 | 0.56 | 0.62 | 0.77 | 0.80 | 0.90 | 0.70 | 0.83 | 0.84 | 0.91 | 0.93 | 0.93 | 0.78 |
| facebook-bart/entailment | 0.51 | 0.41 | 0.43 | 0.47 | 0.50 | 0.74 | 0.55 | 0.57 | 0.60 | 0.63 | 0.70 | 0.52 | 0.56 | 0.68 | 0.67 | 0.72 | 0.64 | 0.58 |
| threeway/contradiction | | 0.71 | 0.64 | 0.61 | | 0.97 | | | 1.00 | 0.77 | 0.92 | | 0.89 | | 0.99 | 0.98 | | 0.85 |
| threeway/neutral | | 0.79 | 0.70 | 0.62 | | 0.91 | | | 0.99 | 0.68 | 0.86 | | 0.79 | | 0.96 | 0.96 | | 0.83 |

For evaluation (and for training of the [tiny](https://huggingface.co/cointegrated/rubert-tiny-bilingual-nli) and [twoway](https://huggingface.co/cointegrated/rubert-base-cased-nli-twoway) models), some extra datasets were used: [Add-one RTE](https://cs.brown.edu/people/epavlick/papers/ans.pdf), [CoPA](https://people.ict.usc.edu/~gordon/copa.html), [IIE](https://aclanthology.org/I17-1100), and [SCITAIL](https://allenai.org/data/scitail) taken from [the repo of Felipe Salvatore](https://github.com/felipessalvatore/NLI_datasets) and translated, [HELP](https://github.com/verypluming/HELP) and [MoNLI](https://github.com/atticusg/MoNLI) taken from the original sources and translated, and Russian [TERRa](https://russiansuperglue.com/ru/tasks/task_info/TERRa).
{"language": "ru", "tags": ["rubert", "russian", "nli", "rte", "zero-shot-classification"], "datasets": ["cointegrated/nli-rus-translated-v2021"], "pipeline_tag": "zero-shot-classification", "widget": [{"text": "\u042f \u0445\u043e\u0447\u0443 \u043f\u043e\u0435\u0445\u0430\u0442\u044c \u0432 \u0410\u0432\u0441\u0442\u0440\u0430\u043b\u0438\u044e", "candidate_labels": "\u0441\u043f\u043e\u0440\u0442,\u043f\u0443\u0442\u0435\u0448\u0435\u0441\u0442\u0432\u0438\u044f,\u043c\u0443\u0437\u044b\u043a\u0430,\u043a\u0438\u043d\u043e,\u043a\u043d\u0438\u0433\u0438,\u043d\u0430\u0443\u043a\u0430,\u043f\u043e\u043b\u0438\u0442\u0438\u043a\u0430", "hypothesis_template": "\u0422\u0435\u043c\u0430 \u0442\u0435\u043a\u0441\u0442\u0430 - {}."}]}
cointegrated/rubert-base-cased-nli-threeway
null
[ "transformers", "pytorch", "safetensors", "bert", "text-classification", "rubert", "russian", "nli", "rte", "zero-shot-classification", "ru", "dataset:cointegrated/nli-rus-translated-v2021", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ru" ]
TAGS #transformers #pytorch #safetensors #bert #text-classification #rubert #russian #nli #rte #zero-shot-classification #ru #dataset-cointegrated/nli-rus-translated-v2021 #autotrain_compatible #endpoints_compatible #has_space #region-us
RuBERT for NLI (natural language inference) =========================================== This is the DeepPavlov/rubert-base-cased fine-tuned to predict the logical relationship between two short texts: entailment, contradiction, or neutral. Usage ----- How to run the model for NLI: You can also use this model for zero-shot short text classification (by labels only), e.g. for sentiment analysis: Alternatively, you can use Huggingface pipelines for inference. Sources ------- The model has been trained on a series of NLI datasets automatically translated to Russian from English. Most datasets were taken from the repo of Felipe Salvatore: JOCI, MNLI, MPE, SICK, SNLI. Some datasets obtained from the original sources: ANLI, NLI-style FEVER, IMPPRES. Performance ----------- The table below shows ROC AUC (one class vs rest) for five models on the corresponding *dev* sets: * tiny: a small BERT predicting entailment vs not\_entailment * twoway: a base-sized BERT predicting entailment vs not\_entailment * threeway (this model): a base-sized BERT predicting entailment vs contradiction vs neutral * vicgalle-xlm: a large multilingual NLI model * facebook-bart: a large multilingual NLI model For evaluation (and for training of the tiny and twoway models), some extra datasets were used: Add-one RTE, CoPA, IIE, and SCITAIL taken from the repo of Felipe Salvatore and translatted, HELP and MoNLI taken from the original sources and translated, and Russian TERRa.
[]
[ "TAGS\n#transformers #pytorch #safetensors #bert #text-classification #rubert #russian #nli #rte #zero-shot-classification #ru #dataset-cointegrated/nli-rus-translated-v2021 #autotrain_compatible #endpoints_compatible #has_space #region-us \n" ]
zero-shot-classification
transformers
# RuBERT for NLI (natural language inference)

This is the [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) model fine-tuned to predict the logical relationship between two short texts: entailment or not entailment.

For more details, see the card for a similar model: https://huggingface.co/cointegrated/rubert-base-cased-nli-threeway
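A minimal inference sketch, mirroring the code shown in the three-way card above; the label names are read from the checkpoint config rather than hard-coded, since they are not listed in this short card:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_checkpoint = "cointegrated/rubert-base-cased-nli-twoway"
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(model_checkpoint)

text1 = "Сократ - человек, а все люди смертны."
text2 = "Сократ никогда не умрёт."
with torch.inference_mode():
    out = model(**tokenizer(text1, text2, return_tensors="pt"))
    proba = torch.softmax(out.logits, -1).cpu().numpy()[0]
# Label names (e.g. entailment vs not_entailment) come from the model config.
print({v: float(proba[k]) for k, v in model.config.id2label.items()})
```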
{"language": "ru", "tags": ["rubert", "russian", "nli", "rte", "zero-shot-classification"], "datasets": ["cointegrated/nli-rus-translated-v2021"], "pipeline_tag": "zero-shot-classification", "widget": [{"text": "\u042f \u0445\u043e\u0447\u0443 \u043f\u043e\u0435\u0445\u0430\u0442\u044c \u0432 \u0410\u0432\u0441\u0442\u0440\u0430\u043b\u0438\u044e", "candidate_labels": "\u0441\u043f\u043e\u0440\u0442,\u043f\u0443\u0442\u0435\u0448\u0435\u0441\u0442\u0432\u0438\u044f,\u043c\u0443\u0437\u044b\u043a\u0430,\u043a\u0438\u043d\u043e,\u043a\u043d\u0438\u0433\u0438,\u043d\u0430\u0443\u043a\u0430,\u043f\u043e\u043b\u0438\u0442\u0438\u043a\u0430", "hypothesis_template": "\u0422\u0435\u043c\u0430 \u0442\u0435\u043a\u0441\u0442\u0430 - {}."}]}
cointegrated/rubert-base-cased-nli-twoway
null
[ "transformers", "pytorch", "safetensors", "bert", "text-classification", "rubert", "russian", "nli", "rte", "zero-shot-classification", "ru", "dataset:cointegrated/nli-rus-translated-v2021", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ru" ]
TAGS #transformers #pytorch #safetensors #bert #text-classification #rubert #russian #nli #rte #zero-shot-classification #ru #dataset-cointegrated/nli-rus-translated-v2021 #autotrain_compatible #endpoints_compatible #region-us
# RuBERT for NLI (natural language inference) This is the DeepPavlov/rubert-base-cased fine-tuned to predict the logical relationship between two short texts: entailment or not entailment. For more details, see the card for a similar model: URL
[ "# RuBERT for NLI (natural language inference)\n\nThis is the DeepPavlov/rubert-base-cased fine-tuned to predict the logical relationship between two short texts: entailment or not entailment.\n\nFor more details, see the card for a similar model: URL" ]
[ "TAGS\n#transformers #pytorch #safetensors #bert #text-classification #rubert #russian #nli #rte #zero-shot-classification #ru #dataset-cointegrated/nli-rus-translated-v2021 #autotrain_compatible #endpoints_compatible #region-us \n", "# RuBERT for NLI (natural language inference)\n\nThis is the DeepPavlov/rubert-base-cased fine-tuned to predict the logical relationship between two short texts: entailment or not entailment.\n\nFor more details, see the card for a similar model: URL" ]
token-classification
transformers
The model for https://github.com/Lesha17/Punctuation; all credits go to the owner of this repository.
{}
cointegrated/rubert-base-lesha17-punctuation
null
[ "transformers", "pytorch", "safetensors", "bert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #safetensors #bert #token-classification #autotrain_compatible #endpoints_compatible #region-us
The model for URL all credits go to the owner of this repository.
[]
[ "TAGS\n#transformers #pytorch #safetensors #bert #token-classification #autotrain_compatible #endpoints_compatible #region-us \n" ]
zero-shot-classification
transformers
# RuBERT-tiny for NLI (natural language inference) This is the [cointegrated/rubert-tiny](https://huggingface.co/cointegrated/rubert-tiny) model fine-tuned to predict the logical relationship between two short texts: entailment or not entailment. For more details, see the card for a related model: https://huggingface.co/cointegrated/rubert-base-cased-nli-threeway
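Since the card is tagged for zero-shot classification, a minimal sketch with the standard `transformers` zero-shot pipeline may be useful; the example text, candidate labels, and hypothesis template below are taken from the widget config in the metadata and are only illustrative:

```python
from transformers import pipeline

# Zero-shot classification through the generic pipeline; the model supplies
# the entailment scores, the pipeline handles label/hypothesis formatting.
classifier = pipeline("zero-shot-classification", model="cointegrated/rubert-tiny-bilingual-nli")

result = classifier(
    "Сервис отстойный, кормили невкусно",
    candidate_labels=["Мне понравилось", "Мне не понравилось"],
    hypothesis_template="{}.",
)
print(result["labels"][0], result["scores"][0])
```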
{"language": "ru", "tags": ["rubert", "russian", "nli", "rte", "zero-shot-classification"], "datasets": ["cointegrated/nli-rus-translated-v2021"], "pipeline_tag": "zero-shot-classification", "widget": [{"text": "\u0421\u0435\u0440\u0432\u0438\u0441 \u043e\u0442\u0441\u0442\u043e\u0439\u043d\u044b\u0439, \u043a\u043e\u0440\u043c\u0438\u043b\u0438 \u043d\u0435\u0432\u043a\u0443\u0441\u043d\u043e", "candidate_labels": "\u041c\u043d\u0435 \u043f\u043e\u043d\u0440\u0430\u0432\u0438\u043b\u043e\u0441\u044c, \u041c\u043d\u0435 \u043d\u0435 \u043f\u043e\u043d\u0440\u0430\u0432\u0438\u043b\u043e\u0441\u044c", "hypothesis_template": "{}."}]}
cointegrated/rubert-tiny-bilingual-nli
null
[ "transformers", "pytorch", "safetensors", "bert", "text-classification", "rubert", "russian", "nli", "rte", "zero-shot-classification", "ru", "dataset:cointegrated/nli-rus-translated-v2021", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ru" ]
TAGS #transformers #pytorch #safetensors #bert #text-classification #rubert #russian #nli #rte #zero-shot-classification #ru #dataset-cointegrated/nli-rus-translated-v2021 #autotrain_compatible #endpoints_compatible #region-us
# RuBERT-tiny for NLI (natural language inference) This is the cointegrated/rubert-tiny model fine-tuned to predict the logical relationship between two short texts: entailment or not entailment. For more details, see the card for a related model: URL
[ "# RuBERT-tiny for NLI (natural language inference)\n\nThis is the cointegrated/rubert-tiny model fine-tuned to predict the logical relationship between two short texts: entailment or not entailment.\n\nFor more details, see the card for a related model: URL" ]
[ "TAGS\n#transformers #pytorch #safetensors #bert #text-classification #rubert #russian #nli #rte #zero-shot-classification #ru #dataset-cointegrated/nli-rus-translated-v2021 #autotrain_compatible #endpoints_compatible #region-us \n", "# RuBERT-tiny for NLI (natural language inference)\n\nThis is the cointegrated/rubert-tiny model fine-tuned to predict the logical relationship between two short texts: entailment or not entailment.\n\nFor more details, see the card for a related model: URL" ]
text-classification
transformers
This is the [cointegrated/rubert-tiny](https://huggingface.co/cointegrated/rubert-tiny) model fine-tuned for classification of sentiment for short Russian texts. The problem is formulated as multiclass classification: `negative` vs `neutral` vs `positive`. ## Usage The function below estimates the sentiment of the given text: ```python # !pip install transformers sentencepiece --quiet import torch from transformers import AutoTokenizer, AutoModelForSequenceClassification model_checkpoint = 'cointegrated/rubert-tiny-sentiment-balanced' tokenizer = AutoTokenizer.from_pretrained(model_checkpoint) model = AutoModelForSequenceClassification.from_pretrained(model_checkpoint) if torch.cuda.is_available(): model.cuda() def get_sentiment(text, return_type='label'): """ Calculate sentiment of a text. `return_type` can be 'label', 'score' or 'proba' """ with torch.no_grad(): inputs = tokenizer(text, return_tensors='pt', truncation=True, padding=True).to(model.device) proba = torch.sigmoid(model(**inputs).logits).cpu().numpy()[0] if return_type == 'label': return model.config.id2label[proba.argmax()] elif return_type == 'score': return proba.dot([-1, 0, 1]) return proba text = 'Какая гадость эта ваша заливная рыба!' # classify the text print(get_sentiment(text, 'label')) # negative # score the text on the scale from -1 (very negative) to +1 (very positive) print(get_sentiment(text, 'score')) # -0.5894946306943893 # calculate probabilities of all labels print(get_sentiment(text, 'proba')) # [0.7870447 0.4947824 0.19755007] ``` ## Training We trained the model on [the datasets collected by Smetanin](https://github.com/sismetanin/sentiment-analysis-in-russian). We have converted all training data into a 3-class format and have up- and downsampled the training data to balance both the sources and the classes. The training code is available as [a Colab notebook](https://gist.github.com/avidale/e678c5478086c1d1adc52a85cb2b93e6). The metrics on the balanced test set are the following: | Source | Macro F1 | | ----------- | ----------- | | SentiRuEval2016_banks | 0.83 | | SentiRuEval2016_tele | 0.74 | | kaggle_news | 0.66 | | linis | 0.50 | | mokoron | 0.98 | | rureviews | 0.72 | | rusentiment | 0.67 |
{"language": ["ru"], "tags": ["russian", "classification", "sentiment", "multiclass"], "widget": [{"text": "\u041a\u0430\u043a\u0430\u044f \u0433\u0430\u0434\u043e\u0441\u0442\u044c \u044d\u0442\u0430 \u0432\u0430\u0448\u0430 \u0437\u0430\u043b\u0438\u0432\u043d\u0430\u044f \u0440\u044b\u0431\u0430!"}]}
cointegrated/rubert-tiny-sentiment-balanced
null
[ "transformers", "pytorch", "safetensors", "bert", "text-classification", "russian", "classification", "sentiment", "multiclass", "ru", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ru" ]
TAGS #transformers #pytorch #safetensors #bert #text-classification #russian #classification #sentiment #multiclass #ru #autotrain_compatible #endpoints_compatible #region-us
This is the cointegrated/rubert-tiny model fine-tuned for classification of sentiment for short Russian texts. The problem is formulated as multiclass classification: 'negative' vs 'neutral' vs 'positive'. Usage ----- The function below estimates the sentiment of the given text: Training -------- We trained the model on the datasets collected by Smetanin. We have converted all training data into a 3-class format and have up- and downsampled the training data to balance both the sources and the classes. The training code is available as a Colab notebook. The metrics on the balanced test set are the following:
[]
[ "TAGS\n#transformers #pytorch #safetensors #bert #text-classification #russian #classification #sentiment #multiclass #ru #autotrain_compatible #endpoints_compatible #region-us \n" ]
text-classification
transformers
This is the [cointegrated/rubert-tiny](https://huggingface.co/cointegrated/rubert-tiny) model fine-tuned for classification of toxicity and inappropriateness for short informal Russian texts, such as comments in social networks. The problem is formulated as multilabel classification with the following classes: - `non-toxic`: the text does NOT contain insults, obscenities, and threats, in the sense of the [OK ML Cup](https://cups.mail.ru/ru/tasks/1048) competition. - `insult` - `obscenity` - `threat` - `dangerous`: the text is inappropriate, in the sense of [Babakov et.al.](https://arxiv.org/abs/2103.05345), i.e. it can harm the reputation of the speaker. A text can be considered safe if it is BOTH `non-toxic` and NOT `dangerous`. ## Usage The function below estimates the probability that the text is either toxic OR dangerous: ```python # !pip install transformers sentencepiece --quiet import torch from transformers import AutoTokenizer, AutoModelForSequenceClassification model_checkpoint = 'cointegrated/rubert-tiny-toxicity' tokenizer = AutoTokenizer.from_pretrained(model_checkpoint) model = AutoModelForSequenceClassification.from_pretrained(model_checkpoint) if torch.cuda.is_available(): model.cuda() def text2toxicity(text, aggregate=True): """ Calculate toxicity of a text (if aggregate=True) or a vector of toxicity aspects (if aggregate=False)""" with torch.no_grad(): inputs = tokenizer(text, return_tensors='pt', truncation=True, padding=True).to(model.device) proba = torch.sigmoid(model(**inputs).logits).cpu().numpy() if isinstance(text, str): proba = proba[0] if aggregate: return 1 - proba.T[0] * (1 - proba.T[-1]) return proba print(text2toxicity('я люблю нигеров', True)) # 0.9350118728093193 print(text2toxicity('я люблю нигеров', False)) # [0.9715758 0.0180863 0.0045551 0.00189755 0.9331106 ] print(text2toxicity(['я люблю нигеров', 'я люблю африканцев'], True)) # [0.93501186 0.04156357] print(text2toxicity(['я люблю нигеров', 'я люблю африканцев'], False)) # [[9.7157580e-01 1.8086294e-02 4.5550885e-03 1.8975559e-03 9.3311059e-01] # [9.9979788e-01 1.9048342e-04 1.5297388e-04 1.7452303e-04 4.1369814e-02]] ``` ## Training The model has been trained on the joint dataset of [OK ML Cup](https://cups.mail.ru/ru/tasks/1048) and [Babakov et.al.](https://arxiv.org/abs/2103.05345) with `Adam` optimizer, the learning rate of `1e-5`, and batch size of `64` for `15` epochs. A text was considered inappropriate if its inappropriateness score was higher than 0.8, and appropriate - if it was lower than 0.2. The per-label ROC AUC on the dev set is: ``` non-toxic : 0.9937 insult : 0.9912 obscenity : 0.9881 threat : 0.9910 dangerous : 0.8295 ```
{"language": ["ru"], "tags": ["russian", "classification", "toxicity", "multilabel"], "widget": [{"text": "\u0418\u0434\u0438 \u0442\u044b \u043d\u0430\u0444\u0438\u0433!"}]}
cointegrated/rubert-tiny-toxicity
null
[ "transformers", "pytorch", "safetensors", "bert", "text-classification", "russian", "classification", "toxicity", "multilabel", "ru", "arxiv:2103.05345", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2103.05345" ]
[ "ru" ]
TAGS #transformers #pytorch #safetensors #bert #text-classification #russian #classification #toxicity #multilabel #ru #arxiv-2103.05345 #autotrain_compatible #endpoints_compatible #has_space #region-us
This is the cointegrated/rubert-tiny model fine-tuned for classification of toxicity and inappropriateness for short informal Russian texts, such as comments in social networks. The problem is formulated as multilabel classification with the following classes: - 'non-toxic': the text does NOT contain insults, obscenities, and threats, in the sense of the OK ML Cup competition. - 'insult' - 'obscenity' - 'threat' - 'dangerous': the text is inappropriate, in the sense of Babakov URL., i.e. it can harm the reputation of the speaker. A text can be considered safe if it is BOTH 'non-toxic' and NOT 'dangerous'. ## Usage The function below estimates the probability that the text is either toxic OR dangerous: ## Training The model has been trained on the joint dataset of OK ML Cup and Babakov URL. with 'Adam' optimizer, the learning rate of '1e-5', and batch size of '64' for '15' epochs. A text was considered inappropriate if its inappropriateness score was higher than 0.8, and appropriate - if it was lower than 0.2. The per-label ROC AUC on the dev set is:
[ "## Usage\n\nThe function below estimates the probability that the text is either toxic OR dangerous:", "## Training\n\nThe model has been trained on the joint dataset of OK ML Cup and Babakov URL. with 'Adam' optimizer, the learning rate of '1e-5', and batch size of '64' for '15' epochs. A text was considered inappropriate if its inappropriateness score was higher than 0.8, and appropriate - if it was lower than 0.2. The per-label ROC AUC on the dev set is:" ]
[ "TAGS\n#transformers #pytorch #safetensors #bert #text-classification #russian #classification #toxicity #multilabel #ru #arxiv-2103.05345 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "## Usage\n\nThe function below estimates the probability that the text is either toxic OR dangerous:", "## Training\n\nThe model has been trained on the joint dataset of OK ML Cup and Babakov URL. with 'Adam' optimizer, the learning rate of '1e-5', and batch size of '64' for '15' epochs. A text was considered inappropriate if its inappropriateness score was higher than 0.8, and appropriate - if it was lower than 0.2. The per-label ROC AUC on the dev set is:" ]
fill-mask
transformers
This is a very small distilled version of the [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) model for Russian and English (45 MB, 12M parameters). There is also an **updated version of this model**, [rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2), with a larger vocabulary and better quality on practically all Russian NLU tasks. This model is useful if you want to fine-tune it for a relatively simple Russian task (e.g. NER or sentiment classification), and you care more about speed and size than about accuracy. It is approximately x10 smaller and faster than a base-sized BERT. Its `[CLS]` embeddings can be used as a sentence representation aligned between Russian and English. It was trained on the [Yandex Translate corpus](https://translate.yandex.ru/corpus), [OPUS-100](https://huggingface.co/datasets/opus100) and [Tatoeba](https://huggingface.co/datasets/tatoeba), using MLM loss (distilled from [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased)), translation ranking loss, and `[CLS]` embeddings distilled from [LaBSE](https://huggingface.co/sentence-transformers/LaBSE), [rubert-base-cased-sentence](https://huggingface.co/DeepPavlov/rubert-base-cased-sentence), Laser and USE. There is a more detailed [description in Russian](https://habr.com/ru/post/562064/). Sentence embeddings can be produced as follows: ```python # pip install transformers sentencepiece import torch from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("cointegrated/rubert-tiny") model = AutoModel.from_pretrained("cointegrated/rubert-tiny") # model.cuda() # uncomment it if you have a GPU def embed_bert_cls(text, model, tokenizer): t = tokenizer(text, padding=True, truncation=True, return_tensors='pt') with torch.no_grad(): model_output = model(**{k: v.to(model.device) for k, v in t.items()}) embeddings = model_output.last_hidden_state[:, 0, :] embeddings = torch.nn.functional.normalize(embeddings) return embeddings[0].cpu().numpy() print(embed_bert_cls('привет мир', model, tokenizer).shape) # (312,) ```
{"language": ["ru", "en"], "license": "mit", "tags": ["russian", "fill-mask", "pretraining", "embeddings", "masked-lm", "tiny", "feature-extraction", "sentence-similarity"], "widget": [{"text": "\u041c\u0438\u043d\u0438\u0430\u0442\u044e\u0440\u043d\u0430\u044f \u043c\u043e\u0434\u0435\u043b\u044c \u0434\u043b\u044f [MASK] \u0440\u0430\u0437\u043d\u044b\u0445 \u0437\u0430\u0434\u0430\u0447."}], "pipeline_tag": "fill-mask"}
cointegrated/rubert-tiny
null
[ "transformers", "pytorch", "safetensors", "bert", "pretraining", "russian", "fill-mask", "embeddings", "masked-lm", "tiny", "feature-extraction", "sentence-similarity", "ru", "en", "license:mit", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ru", "en" ]
TAGS #transformers #pytorch #safetensors #bert #pretraining #russian #fill-mask #embeddings #masked-lm #tiny #feature-extraction #sentence-similarity #ru #en #license-mit #endpoints_compatible #has_space #region-us
This is a very small distilled version of the bert-base-multilingual-cased model for Russian and English (45 MB, 12M parameters). There is also an updated version of this model, rubert-tiny2, with a larger vocabulary and better quality on practically all Russian NLU tasks. This model is useful if you want to fine-tune it for a relatively simple Russian task (e.g. NER or sentiment classification), and you care more about speed and size than about accuracy. It is approximately x10 smaller and faster than a base-sized BERT. Its '[CLS]' embeddings can be used as a sentence representation aligned between Russian and English. It was trained on the Yandex Translate corpus, OPUS-100 and Tatoeba, using MLM loss (distilled from bert-base-multilingual-cased), translation ranking loss, and '[CLS]' embeddings distilled from LaBSE, rubert-base-cased-sentence, Laser and USE. There is a more detailed description in Russian. Sentence embeddings can be produced as follows:
[]
[ "TAGS\n#transformers #pytorch #safetensors #bert #pretraining #russian #fill-mask #embeddings #masked-lm #tiny #feature-extraction #sentence-similarity #ru #en #license-mit #endpoints_compatible #has_space #region-us \n" ]
text-classification
transformers
This is the [cointegrated/rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2) model fine-tuned for classification of emotions in Russian sentences. The task is multilabel classification, because one sentence can contain multiple emotions.

The model was trained on the [CEDR dataset](https://huggingface.co/datasets/cedr) described in the paper ["Data-Driven Model for Emotion Detection in Russian Texts"](https://doi.org/10.1016/j.procs.2021.06.075) by Sboev et al. Training used the Adam optimizer for 40 epochs with a learning rate of `1e-5` and a batch size of 64 [in this notebook](https://colab.research.google.com/drive/1AFW70EJaBn7KZKRClDIdDUpbD46cEsat?usp=sharing).

The quality of the predicted probabilities on the test dataset is the following:

| label    | no emotion | joy    | sadness | surprise | fear   | anger  | mean   | mean (emotions) |
|----------|------------|--------|---------|----------|--------|--------|--------|-----------------|
| AUC      | 0.9286     | 0.9512 | 0.9564  | 0.8908   | 0.8955 | 0.7511 | 0.8956 | 0.8890          |
| F1 micro | 0.8624     | 0.9389 | 0.9362  | 0.9469   | 0.9575 | 0.9261 | 0.9280 | 0.9411          |
| F1 macro | 0.8562     | 0.8962 | 0.9017  | 0.8366   | 0.8359 | 0.6820 | 0.8348 | 0.8305          |
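A minimal multilabel inference sketch, assuming the checkpoint loads as a standard sequence-classification model; because the task is multilabel, a sigmoid is applied per emotion instead of a softmax, and label names are taken from the model config (the example sentence comes from the widget config):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_checkpoint = "cointegrated/rubert-tiny2-cedr-emotion-detection"
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(model_checkpoint)

def predict_emotions(text):
    """Return {emotion: probability}; probabilities are independent sigmoids."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True)
    with torch.inference_mode():
        proba = torch.sigmoid(model(**inputs).logits).cpu().numpy()[0]
    return {model.config.id2label[i]: float(p) for i, p in enumerate(proba)}

print(predict_emotions("Как здорово, что все мы здесь сегодня собрались"))
```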
{"language": ["ru"], "tags": ["russian", "classification", "sentiment", "emotion-classification", "multiclass"], "datasets": ["cedr"], "widget": [{"text": "\u0411\u0435\u0441\u0438\u0448\u044c \u043c\u0435\u043d\u044f, \u043f\u0430\u0434\u043b\u0430"}, {"text": "\u041a\u0430\u043a \u0437\u0434\u043e\u0440\u043e\u0432\u043e, \u0447\u0442\u043e \u0432\u0441\u0435 \u043c\u044b \u0437\u0434\u0435\u0441\u044c \u0441\u0435\u0433\u043e\u0434\u043d\u044f \u0441\u043e\u0431\u0440\u0430\u043b\u0438\u0441\u044c"}, {"text": "\u041a\u0430\u043a-\u0442\u043e \u0441\u0442\u0440\u0451\u043c\u043d\u043e, \u0434\u0430\u0432\u0430\u0439 \u0441\u0432\u0430\u043b\u0438\u043c \u043e\u0442\u0441\u044e\u0434\u0430?"}, {"text": "\u0413\u0440\u0443\u0441\u0442\u044c-\u0442\u043e\u0441\u043a\u0430 \u043c\u0435\u043d\u044f \u0441\u044a\u0435\u0434\u0430\u0435\u0442"}, {"text": "\u0414\u0430\u043d\u043d\u044b\u0439 \u0444\u0440\u0430\u0433\u043c\u0435\u043d\u0442 \u0442\u0435\u043a\u0441\u0442\u0430 \u043d\u0435 \u0441\u043e\u0434\u0435\u0440\u0436\u0438\u0442 \u0430\u0431\u0441\u043e\u043b\u044e\u0442\u043d\u043e \u043d\u0438\u043a\u0430\u043a\u0438\u0445 \u044d\u043c\u043e\u0446\u0438\u0439"}, {"text": "\u041d\u0438\u0444\u0438\u0433\u0430 \u0441\u0435\u0431\u0435, \u043d\u0435\u0443\u0436\u0435\u043b\u0438 \u0442\u0430\u043a \u0442\u043e\u0436\u0435 \u0431\u044b\u0432\u0430\u0435\u0442!"}]}
cointegrated/rubert-tiny2-cedr-emotion-detection
null
[ "transformers", "pytorch", "safetensors", "bert", "text-classification", "russian", "classification", "sentiment", "emotion-classification", "multiclass", "ru", "dataset:cedr", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ru" ]
TAGS #transformers #pytorch #safetensors #bert #text-classification #russian #classification #sentiment #emotion-classification #multiclass #ru #dataset-cedr #autotrain_compatible #endpoints_compatible #has_space #region-us
This is the cointegrated/rubert-tiny2 model fine-tuned for classification of emotions in Russian sentences. The task is multilabel classification, because one sentence can contain multiple emotions. The model on the CEDR dataset described in the paper "Data-Driven Model for Emotion Detection in Russian Texts" by Sboev et al. The model has been trained with Adam optimizer for 40 epochs with learning rate '1e-5' and batch size 64 in this notebook. The quality of the predicted probabilities on the test dataset is the following:
[]
[ "TAGS\n#transformers #pytorch #safetensors #bert #text-classification #russian #classification #sentiment #emotion-classification #multiclass #ru #dataset-cedr #autotrain_compatible #endpoints_compatible #has_space #region-us \n" ]
sentence-similarity
sentence-transformers
This is an updated version of [cointegrated/rubert-tiny](https://huggingface.co/cointegrated/rubert-tiny): a small Russian BERT-based encoder with high-quality sentence embeddings. This [post in Russian](https://habr.com/ru/post/669674/) gives more details. The differences from the previous version include: - a larger vocabulary: 83828 tokens instead of 29564; - larger supported sequences: 2048 instead of 512; - sentence embeddings approximate LaBSE closer than before; - meaningful segment embeddings (tuned on the NLI task) - the model is focused only on Russian. The model should be used as is to produce sentence embeddings (e.g. for KNN classification of short texts) or fine-tuned for a downstream task. Sentence embeddings can be produced as follows: ```python # pip install transformers sentencepiece import torch from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("cointegrated/rubert-tiny2") model = AutoModel.from_pretrained("cointegrated/rubert-tiny2") # model.cuda() # uncomment it if you have a GPU def embed_bert_cls(text, model, tokenizer): t = tokenizer(text, padding=True, truncation=True, return_tensors='pt') with torch.no_grad(): model_output = model(**{k: v.to(model.device) for k, v in t.items()}) embeddings = model_output.last_hidden_state[:, 0, :] embeddings = torch.nn.functional.normalize(embeddings) return embeddings[0].cpu().numpy() print(embed_bert_cls('привет мир', model, tokenizer).shape) # (312,) ``` Alternatively, you can use the model with `sentence_transformers`: ```Python from sentence_transformers import SentenceTransformer model = SentenceTransformer('cointegrated/rubert-tiny2') sentences = ["привет мир", "hello world", "здравствуй вселенная"] embeddings = model.encode(sentences) print(embeddings) ```
{"language": ["ru"], "license": "mit", "tags": ["russian", "fill-mask", "pretraining", "embeddings", "masked-lm", "tiny", "feature-extraction", "sentence-similarity", "sentence-transformers", "transformers"], "pipeline_tag": "sentence-similarity", "widget": [{"text": "\u041c\u0438\u043d\u0438\u0430\u0442\u044e\u0440\u043d\u0430\u044f \u043c\u043e\u0434\u0435\u043b\u044c \u0434\u043b\u044f [MASK] \u0440\u0430\u0437\u043d\u044b\u0445 \u0437\u0430\u0434\u0430\u0447."}]}
cointegrated/rubert-tiny2
null
[ "sentence-transformers", "pytorch", "safetensors", "bert", "pretraining", "russian", "fill-mask", "embeddings", "masked-lm", "tiny", "feature-extraction", "sentence-similarity", "transformers", "ru", "license:mit", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ru" ]
TAGS #sentence-transformers #pytorch #safetensors #bert #pretraining #russian #fill-mask #embeddings #masked-lm #tiny #feature-extraction #sentence-similarity #transformers #ru #license-mit #endpoints_compatible #has_space #region-us
This is an updated version of cointegrated/rubert-tiny: a small Russian BERT-based encoder with high-quality sentence embeddings. This post in Russian gives more details. The differences from the previous version include: - a larger vocabulary: 83828 tokens instead of 29564; - larger supported sequences: 2048 instead of 512; - sentence embeddings approximate LaBSE closer than before; - meaningful segment embeddings (tuned on the NLI task) - the model is focused only on Russian. The model should be used as is to produce sentence embeddings (e.g. for KNN classification of short texts) or fine-tuned for a downstream task. Sentence embeddings can be produced as follows: Alternatively, you can use the model with 'sentence_transformers':
[]
[ "TAGS\n#sentence-transformers #pytorch #safetensors #bert #pretraining #russian #fill-mask #embeddings #masked-lm #tiny #feature-extraction #sentence-similarity #transformers #ru #license-mit #endpoints_compatible #has_space #region-us \n" ]
summarization
transformers
This is a model for abstractive Russian summarization, based on [cointegrated/rut5-base-multitask](https://huggingface.co/cointegrated/rut5-base-multitask) and fine-tuned on 4 datasets. It can be used as follows: ```python import torch from transformers import T5ForConditionalGeneration, T5Tokenizer MODEL_NAME = 'cointegrated/rut5-base-absum' model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME) tokenizer = T5Tokenizer.from_pretrained(MODEL_NAME) model.cuda(); model.eval(); def summarize( text, n_words=None, compression=None, max_length=1000, num_beams=3, do_sample=False, repetition_penalty=10.0, **kwargs ): """ Summarize the text The following parameters are mutually exclusive: - n_words (int) is an approximate number of words to generate. - compression (float) is an approximate length ratio of summary and original text. """ if n_words: text = '[{}] '.format(n_words) + text elif compression: text = '[{0:.1g}] '.format(compression) + text x = tokenizer(text, return_tensors='pt', padding=True).to(model.device) with torch.inference_mode(): out = model.generate( **x, max_length=max_length, num_beams=num_beams, do_sample=do_sample, repetition_penalty=repetition_penalty, **kwargs ) return tokenizer.decode(out[0], skip_special_tokens=True) text = """Высота башни составляет 324 метра (1063 фута), примерно такая же высота, как у 81-этажного здания, и самое высокое сооружение в Париже. Его основание квадратно, размером 125 метров (410 футов) с любой стороны. Во время строительства Эйфелева башня превзошла монумент Вашингтона, став самым высоким искусственным сооружением в мире, и этот титул она удерживала в течение 41 года до завершения строительство здания Крайслер в Нью-Йорке в 1930 году. Это первое сооружение которое достигло высоты 300 метров. Из-за добавления вещательной антенны на вершине башни в 1957 году она сейчас выше здания Крайслер на 5,2 метра (17 футов). За исключением передатчиков, Эйфелева башня является второй самой высокой отдельно стоящей структурой во Франции после виадука Мийо.""" print(summarize(text)) # Эйфелева башня достигла высоты 300 метров. print(summarize(text, n_words=10)) # Французская Эйфелева башня достигла высоты 300 метров. ```
{"language": ["ru"], "license": "mit", "tags": ["russian", "summarization"], "datasets": ["IlyaGusev/gazeta", "csebuetnlp/xlsum", "mlsum", "wiki_lingua"], "widget": [{"text": "\u0412\u044b\u0441\u043e\u0442\u0430 \u0431\u0430\u0448\u043d\u0438 \u0441\u043e\u0441\u0442\u0430\u0432\u043b\u044f\u0435\u0442 324 \u043c\u0435\u0442\u0440\u0430 (1063 \u0444\u0443\u0442\u0430), \u043f\u0440\u0438\u043c\u0435\u0440\u043d\u043e \u0442\u0430\u043a\u0430\u044f \u0436\u0435 \u0432\u044b\u0441\u043e\u0442\u0430, \u043a\u0430\u043a \u0443 81-\u044d\u0442\u0430\u0436\u043d\u043e\u0433\u043e \u0437\u0434\u0430\u043d\u0438\u044f, \u0438 \u0441\u0430\u043c\u043e\u0435 \u0432\u044b\u0441\u043e\u043a\u043e\u0435 \u0441\u043e\u043e\u0440\u0443\u0436\u0435\u043d\u0438\u0435 \u0432 \u041f\u0430\u0440\u0438\u0436\u0435. \u0415\u0433\u043e \u043e\u0441\u043d\u043e\u0432\u0430\u043d\u0438\u0435 \u043a\u0432\u0430\u0434\u0440\u0430\u0442\u043d\u043e, \u0440\u0430\u0437\u043c\u0435\u0440\u043e\u043c 125 \u043c\u0435\u0442\u0440\u043e\u0432 (410 \u0444\u0443\u0442\u043e\u0432) \u0441 \u043b\u044e\u0431\u043e\u0439 \u0441\u0442\u043e\u0440\u043e\u043d\u044b. \u0412\u043e \u0432\u0440\u0435\u043c\u044f \u0441\u0442\u0440\u043e\u0438\u0442\u0435\u043b\u044c\u0441\u0442\u0432\u0430 \u042d\u0439\u0444\u0435\u043b\u0435\u0432\u0430 \u0431\u0430\u0448\u043d\u044f \u043f\u0440\u0435\u0432\u0437\u043e\u0448\u043b\u0430 \u043c\u043e\u043d\u0443\u043c\u0435\u043d\u0442 \u0412\u0430\u0448\u0438\u043d\u0433\u0442\u043e\u043d\u0430, \u0441\u0442\u0430\u0432 \u0441\u0430\u043c\u044b\u043c \u0432\u044b\u0441\u043e\u043a\u0438\u043c \u0438\u0441\u043a\u0443\u0441\u0441\u0442\u0432\u0435\u043d\u043d\u044b\u043c \u0441\u043e\u043e\u0440\u0443\u0436\u0435\u043d\u0438\u0435\u043c \u0432 \u043c\u0438\u0440\u0435, \u0438 \u044d\u0442\u043e\u0442 \u0442\u0438\u0442\u0443\u043b \u043e\u043d\u0430 \u0443\u0434\u0435\u0440\u0436\u0438\u0432\u0430\u043b\u0430 \u0432 \u0442\u0435\u0447\u0435\u043d\u0438\u0435 41 \u0433\u043e\u0434\u0430 \u0434\u043e \u0437\u0430\u0432\u0435\u0440\u0448\u0435\u043d\u0438\u044f \u0441\u0442\u0440\u043e\u0438\u0442\u0435\u043b\u044c\u0441\u0442\u0432\u043e \u0437\u0434\u0430\u043d\u0438\u044f \u041a\u0440\u0430\u0439\u0441\u043b\u0435\u0440 \u0432 \u041d\u044c\u044e-\u0419\u043e\u0440\u043a\u0435 \u0432 1930 \u0433\u043e\u0434\u0443. \u042d\u0442\u043e \u043f\u0435\u0440\u0432\u043e\u0435 \u0441\u043e\u043e\u0440\u0443\u0436\u0435\u043d\u0438\u0435 \u043a\u043e\u0442\u043e\u0440\u043e\u0435 \u0434\u043e\u0441\u0442\u0438\u0433\u043b\u043e \u0432\u044b\u0441\u043e\u0442\u044b 300 \u043c\u0435\u0442\u0440\u043e\u0432. \u0418\u0437-\u0437\u0430 \u0434\u043e\u0431\u0430\u0432\u043b\u0435\u043d\u0438\u044f \u0432\u0435\u0449\u0430\u0442\u0435\u043b\u044c\u043d\u043e\u0439 \u0430\u043d\u0442\u0435\u043d\u043d\u044b \u043d\u0430 \u0432\u0435\u0440\u0448\u0438\u043d\u0435 \u0431\u0430\u0448\u043d\u0438 \u0432 1957 \u0433\u043e\u0434\u0443 \u043e\u043d\u0430 \u0441\u0435\u0439\u0447\u0430\u0441 \u0432\u044b\u0448\u0435 \u0437\u0434\u0430\u043d\u0438\u044f \u041a\u0440\u0430\u0439\u0441\u043b\u0435\u0440 \u043d\u0430 5,2 \u043c\u0435\u0442\u0440\u0430 (17 \u0444\u0443\u0442\u043e\u0432). 
\u0417\u0430 \u0438\u0441\u043a\u043b\u044e\u0447\u0435\u043d\u0438\u0435\u043c \u043f\u0435\u0440\u0435\u0434\u0430\u0442\u0447\u0438\u043a\u043e\u0432, \u042d\u0439\u0444\u0435\u043b\u0435\u0432\u0430 \u0431\u0430\u0448\u043d\u044f \u044f\u0432\u043b\u044f\u0435\u0442\u0441\u044f \u0432\u0442\u043e\u0440\u043e\u0439 \u0441\u0430\u043c\u043e\u0439 \u0432\u044b\u0441\u043e\u043a\u043e\u0439 \u043e\u0442\u0434\u0435\u043b\u044c\u043d\u043e \u0441\u0442\u043e\u044f\u0449\u0435\u0439 \u0441\u0442\u0440\u0443\u043a\u0442\u0443\u0440\u043e\u0439 \u0432\u043e \u0424\u0440\u0430\u043d\u0446\u0438\u0438 \u043f\u043e\u0441\u043b\u0435 \u0432\u0438\u0430\u0434\u0443\u043a\u0430 \u041c\u0438\u0439\u043e."}]}
cointegrated/rut5-base-absum
null
[ "transformers", "pytorch", "safetensors", "t5", "text2text-generation", "russian", "summarization", "ru", "dataset:IlyaGusev/gazeta", "dataset:csebuetnlp/xlsum", "dataset:mlsum", "dataset:wiki_lingua", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ru" ]
TAGS #transformers #pytorch #safetensors #t5 #text2text-generation #russian #summarization #ru #dataset-IlyaGusev/gazeta #dataset-csebuetnlp/xlsum #dataset-mlsum #dataset-wiki_lingua #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
This is a model for abstractive Russian summarization, based on cointegrated/rut5-base-multitask and fine-tuned on 4 datasets. It can be used as follows:
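The usage snippet was stripped from this field; a condensed sketch follows, with the beam-search and repetition-penalty values taken from the full example above and `max_length` as an assumption:

```python
# Sketch: abstractive summarization of a Russian text with rut5-base-absum.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

MODEL_NAME = "cointegrated/rut5-base-absum"
tokenizer = T5Tokenizer.from_pretrained(MODEL_NAME)
model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME)

text = "Высота башни составляет 324 метра ..."  # any Russian text to summarize
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, num_beams=3, repetition_penalty=10.0, max_length=100)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```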
[]
[ "TAGS\n#transformers #pytorch #safetensors #t5 #text2text-generation #russian #summarization #ru #dataset-IlyaGusev/gazeta #dataset-csebuetnlp/xlsum #dataset-mlsum #dataset-wiki_lingua #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n" ]
text2text-generation
transformers
This is a smaller version of the [google/mt5-base](https://huggingface.co/google/mt5-base) with only some Rusian and English embeddings left. More details are given in a Russian post: https://habr.com/ru/post/581932/ The model has been fine-tuned for several tasks with sentences or short paragraphs: * translation (`translate ru-en` and `translate en-ru`) * Paraphrasing (`paraphrase`) * Filling gaps in a text (`fill`). The gaps can be denoted as `___` or `_3_`, where `3` is the approximate number of words that should be inserted. * Restoring the text from a noisy bag of words (`assemble`) * Simplification of texts (`simplify`) * Dialogue response generation (`reply` based on fiction and `answer` based on online forums) * Open-book question answering (`comprehend`) * Asking questions about a text (`ask`) * News title generation (`headline`) For each task, the task name is joined with the input text by the ` | ` separator. The model can be run with the following code: ``` # !pip install transformers sentencepiece import torch from transformers import T5ForConditionalGeneration, T5Tokenizer tokenizer = T5Tokenizer.from_pretrained("cointegrated/rut5-base-multitask") model = T5ForConditionalGeneration.from_pretrained("cointegrated/rut5-base-multitask") def generate(text, **kwargs): inputs = tokenizer(text, return_tensors='pt') with torch.no_grad(): hypotheses = model.generate(**inputs, num_beams=5, **kwargs) return tokenizer.decode(hypotheses[0], skip_special_tokens=True) ``` The model can be applied to each of the pretraining tasks: ``` print(generate('translate ru-en | Каждый охотник желает знать, где сидит фазан.')) # Each hunter wants to know, where he is. print(generate('paraphrase | Каждый охотник желает знать, где сидит фазан.', encoder_no_repeat_ngram_size=1, repetition_penalty=0.5, no_repeat_ngram_size=1)) # В любом случае каждый рыбак мечтает познакомиться со своей фермой print(generate('fill | Каждый охотник _3_, где сидит фазан.')) # смотрит на озеро print(generate('assemble | охотник каждый знать фазан сидит')) # Каждый охотник знает, что фазан сидит. print(generate('simplify | Местным продуктом-специалитетом с защищённым географическим наименованием по происхождению считается люнебургский степной барашек.', max_length=32)) # Местным продуктом-специалитетом считается люнебургский степной барашек. print(generate('reply | Помогите мне закадрить девушку')) # Что я хочу? print(generate('answer | Помогите мне закадрить девушку')) # я хочу познакомиться с девушкой!!!!!!!! print(generate("comprehend | На фоне земельного конфликта между владельцами овец и ранчеро разворачивается история любви овцевода Моргана Лейна, " "прибывшего в США из Австралии, и Марии Синглетон, владелицы богатого скотоводческого ранчо. Вопрос: откуда приехал Морган?")) # из Австралии print(generate("ask | На фоне земельного конфликта между владельцами овец и ранчеро разворачивается история любви овцевода Моргана Лейна, " "прибывшего в США из Австралии, и Марии Синглетон, владелицы богатого скотоводческого ранчо.", max_length=32)) # Что разворачивается на фоне земельного конфликта между владельцами овец и ранчеро? 
print(generate("headline | На фоне земельного конфликта между владельцами овец и ранчеро разворачивается история любви овцевода Моргана Лейна, " "прибывшего в США из Австралии, и Марии Синглетон, владелицы богатого скотоводческого ранчо.", max_length=32)) # На фоне земельного конфликта разворачивается история любви овцевода Моргана Лейна и Марии Синглетон ``` However, it is strongly recommended that you fine tune the model for your own task.
{"language": ["ru", "en"], "license": "mit", "tags": ["russian"], "widget": [{"text": "fill | \u041f\u043e\u0447\u0435\u043c\u0443 \u043e\u043d\u0438 \u043d\u0435 ___ \u043d\u0430 \u043c\u0435\u043d\u044f?"}]}
cointegrated/rut5-base-multitask
null
[ "transformers", "pytorch", "jax", "safetensors", "t5", "text2text-generation", "russian", "ru", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ru", "en" ]
TAGS #transformers #pytorch #jax #safetensors #t5 #text2text-generation #russian #ru #en #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
This is a smaller version of the google/mt5-base model with only some Russian and English embeddings left. More details are given in a Russian post: URL The model has been fine-tuned for several tasks with sentences or short paragraphs: * translation ('translate ru-en' and 'translate en-ru') * Paraphrasing ('paraphrase') * Filling gaps in a text ('fill'). The gaps can be denoted as '___' or '_3_', where '3' is the approximate number of words that should be inserted. * Restoring the text from a noisy bag of words ('assemble') * Simplification of texts ('simplify') * Dialogue response generation ('reply' based on fiction and 'answer' based on online forums) * Open-book question answering ('comprehend') * Asking questions about a text ('ask') * News title generation ('headline') For each task, the task name is joined with the input text by the ' | ' separator. The model can be run with the following code: The model can be applied to each of the pretraining tasks: However, it is strongly recommended that you fine-tune the model for your own task.
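The run/apply snippets referenced above were stripped from this field; a condensed sketch covering a single task prefix (the beam count follows the full example above):

```python
# Sketch: run one of the task prefixes with rut5-base-multitask.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("cointegrated/rut5-base-multitask")
model = T5ForConditionalGeneration.from_pretrained("cointegrated/rut5-base-multitask")

def generate(text, **kwargs):
    # The task name is joined with the input text by the " | " separator, as described above.
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model.generate(**inputs, num_beams=5, **kwargs)
    return tokenizer.decode(out[0], skip_special_tokens=True)

print(generate("translate ru-en | Каждый охотник желает знать, где сидит фазан."))
```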
[]
[ "TAGS\n#transformers #pytorch #jax #safetensors #t5 #text2text-generation #russian #ru #en #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n" ]
text2text-generation
transformers
This is a paraphraser for Russian sentences described [in this Habr post](https://habr.com/ru/post/564916/). It is recommended to use the model with the `encoder_no_repeat_ngram_size` argument: ``` from transformers import T5ForConditionalGeneration, T5Tokenizer MODEL_NAME = 'cointegrated/rut5-base-paraphraser' model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME) tokenizer = T5Tokenizer.from_pretrained(MODEL_NAME) model.cuda(); model.eval(); def paraphrase(text, beams=5, grams=4, do_sample=False): x = tokenizer(text, return_tensors='pt', padding=True).to(model.device) max_size = int(x.input_ids.shape[1] * 1.5 + 10) out = model.generate(**x, encoder_no_repeat_ngram_size=grams, num_beams=beams, max_length=max_size, do_sample=do_sample) return tokenizer.decode(out[0], skip_special_tokens=True) print(paraphrase('Каждый охотник желает знать, где сидит фазан.')) # Все охотники хотят знать где фазан сидит. ```
{"language": ["ru"], "license": "mit", "tags": ["russian", "paraphrasing", "paraphraser", "paraphrase"], "datasets": ["cointegrated/ru-paraphrase-NMT-Leipzig"], "widget": [{"text": "\u041a\u0430\u0436\u0434\u044b\u0439 \u043e\u0445\u043e\u0442\u043d\u0438\u043a \u0436\u0435\u043b\u0430\u0435\u0442 \u0437\u043d\u0430\u0442\u044c, \u0433\u0434\u0435 \u0441\u0438\u0434\u0438\u0442 \u0444\u0430\u0437\u0430\u043d."}]}
cointegrated/rut5-base-paraphraser
null
[ "transformers", "pytorch", "safetensors", "t5", "text2text-generation", "russian", "paraphrasing", "paraphraser", "paraphrase", "ru", "dataset:cointegrated/ru-paraphrase-NMT-Leipzig", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ru" ]
TAGS #transformers #pytorch #safetensors #t5 #text2text-generation #russian #paraphrasing #paraphraser #paraphrase #ru #dataset-cointegrated/ru-paraphrase-NMT-Leipzig #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
This is a paraphraser for Russian sentences described in this Habr post. It is recommended to use the model with the 'encoder_no_repeat_ngram_size' argument:
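A condensed sketch of the stripped-out snippet, keeping the 4-gram encoder no-repeat constraint and 5 beams from the full example above; the `max_length` value is an assumption:

```python
# Sketch: paraphrase a Russian sentence with rut5-base-paraphraser.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

MODEL_NAME = "cointegrated/rut5-base-paraphraser"
tokenizer = T5Tokenizer.from_pretrained(MODEL_NAME)
model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME)

x = tokenizer("Каждый охотник желает знать, где сидит фазан.", return_tensors="pt")
with torch.no_grad():
    out = model.generate(**x, encoder_no_repeat_ngram_size=4, num_beams=5, max_length=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```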
[]
[ "TAGS\n#transformers #pytorch #safetensors #t5 #text2text-generation #russian #paraphrasing #paraphraser #paraphrase #ru #dataset-cointegrated/ru-paraphrase-NMT-Leipzig #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n" ]
text2text-generation
transformers
This is a smaller version of the [google/mt5-base](https://huggingface.co/google/mt5-base) model with only Russian and some English embeddings left. * The original model has 582M parameters, with 384M of them being input and output embeddings. * After shrinking the `sentencepiece` vocabulary from 250K to 30K (top 10K English and top 20K Russian tokens) the number of model parameters reduced to 244M parameters, and model size reduced from 2.2GB to 0.9GB - 42% of the original one. The creation of this model is described in the post [How to adapt a multilingual T5 model for a single language](https://cointegrated.medium.com/how-to-adapt-a-multilingual-t5-model-for-a-single-language-b9f94f3d9c90) along with the source code.
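A small sketch that makes the size figures above concrete by loading the checkpoint and counting parameters (the exact printed numbers are not guaranteed here, but they should be close to the 30K-token vocabulary and ~244M parameters quoted above):

```python
# Sketch: load rut5-base and compare its size against the figures quoted above.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("cointegrated/rut5-base")
model = T5ForConditionalGeneration.from_pretrained("cointegrated/rut5-base")

n_params = sum(p.numel() for p in model.parameters())
print(f"vocab size: {tokenizer.vocab_size}, parameters: {n_params / 1e6:.0f}M")
```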
{"language": ["ru", "en", "multilingual"], "license": "mit", "tags": ["russian"]}
cointegrated/rut5-base
null
[ "transformers", "pytorch", "jax", "safetensors", "t5", "text2text-generation", "russian", "ru", "en", "multilingual", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ru", "en", "multilingual" ]
TAGS #transformers #pytorch #jax #safetensors #t5 #text2text-generation #russian #ru #en #multilingual #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
This is a smaller version of the google/mt5-base model with only Russian and some English embeddings left. * The original model has 582M parameters, with 384M of them being input and output embeddings. * After shrinking the 'sentencepiece' vocabulary from 250K to 30K (top 10K English and top 20K Russian tokens) the number of model parameters reduced to 244M parameters, and model size reduced from 2.2GB to 0.9GB - 42% of the original one. The creation of this model is described in the post How to adapt a multilingual T5 model for a single language along with the source code.
[]
[ "TAGS\n#transformers #pytorch #jax #safetensors #t5 #text2text-generation #russian #ru #en #multilingual #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text2text-generation
transformers
This is a version of the [cointegrated/rut5-small](https://huggingface.co/cointegrated/rut5-small) model fine-tuned on some Russian dialogue data. It is not very smart and creative, but it is small and fast, and can serve as a fallback response generator for some chatbot or can be fine-tuned to imitate the style of someone. The input of the model is the previous dialogue utterances separated by `'\n\n'`, and the output is the next utterance. The model can be used as follows: ``` # !pip install transformers sentencepiece import torch from transformers import T5ForConditionalGeneration, T5Tokenizer tokenizer = T5Tokenizer.from_pretrained("cointegrated/rut5-small-chitchat") model = T5ForConditionalGeneration.from_pretrained("cointegrated/rut5-small-chitchat") text = 'Привет! Расскажи, как твои дела?' inputs = tokenizer(text, return_tensors='pt') with torch.no_grad(): hypotheses = model.generate( **inputs, do_sample=True, top_p=0.5, num_return_sequences=3, repetition_penalty=2.5, max_length=32, ) for h in hypotheses: print(tokenizer.decode(h, skip_special_tokens=True)) # Как обычно. # Сейчас - в порядке. # Хорошо. # Wall time: 363 ms ```
{"language": "ru", "license": "mit", "tags": ["dialogue", "russian"]}
cointegrated/rut5-small-chitchat
null
[ "transformers", "pytorch", "safetensors", "t5", "text2text-generation", "dialogue", "russian", "ru", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ru" ]
TAGS #transformers #pytorch #safetensors #t5 #text2text-generation #dialogue #russian #ru #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
This is a version of the cointegrated/rut5-small model fine-tuned on some Russian dialogue data. It is not very smart and creative, but it is small and fast, and can serve as a fallback response generator for some chatbot or can be fine-tuned to imitate the style of someone. The input of the model is the previous dialogue utterances separated by ''\n\n'', and the output is the next utterance. The model can be used as follows:
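A condensed sketch of the stripped snippet; the sampling settings follow the full example above:

```python
# Sketch: generate the next utterance with rut5-small-chitchat.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("cointegrated/rut5-small-chitchat")
model = T5ForConditionalGeneration.from_pretrained("cointegrated/rut5-small-chitchat")

# Previous utterances are separated by '\n\n'; the model generates the next one.
history = "Привет! Расскажи, как твои дела?"
inputs = tokenizer(history, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, do_sample=True, top_p=0.5,
                         repetition_penalty=2.5, max_length=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```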
[]
[ "TAGS\n#transformers #pytorch #safetensors #t5 #text2text-generation #dialogue #russian #ru #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text2text-generation
transformers
A version of https://huggingface.co/cointegrated/rut5-small-chitchat which is more dull but less toxic.
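No usage snippet is given for this checkpoint. Its tags mark it as a T5 text2text model, and it is presumably a drop-in replacement for the model it derives from, so a hedged sketch (generation settings are assumptions borrowed from that model's example) would be:

```python
# Sketch (assumption): rut5-small-chitchat2 is used the same way as rut5-small-chitchat.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("cointegrated/rut5-small-chitchat2")
model = T5ForConditionalGeneration.from_pretrained("cointegrated/rut5-small-chitchat2")

inputs = tokenizer("Привет! Как твои дела?", return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, do_sample=True, top_p=0.5,
                         repetition_penalty=2.5, max_length=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```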
{}
cointegrated/rut5-small-chitchat2
null
[ "transformers", "pytorch", "safetensors", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #safetensors #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
A version of URL which is more dull but less toxic.
[]
[ "TAGS\n#transformers #pytorch #safetensors #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text2text-generation
transformers
This is a small Russian denoising autoencoder. It can be used for restoring corrupted sentences. This model was produced by fine-tuning the [rut5-small](https://huggingface.co/cointegrated/rut5-small) model on the task of reconstructing a sentence: * restoring word positions (after slightly shuffling them) * restoring dropped words and punctuation marks (after dropping some of them randomly) * restoring inflection of words (after changing their inflection randomly using [natasha](https://github.com/natasha/natasha) and [pymorphy2](https://github.com/kmike/pymorphy2) packages) The fine-tuning was performed on a [Leipzig web corpus](https://wortschatz.uni-leipzig.de/en/download/Russian) of Russian sentences. The model can be applied as follows: ``` # !pip install transformers sentencepiece import torch from transformers import T5ForConditionalGeneration, T5Tokenizer tokenizer = T5Tokenizer.from_pretrained("cointegrated/rut5-small-normalizer") model = T5ForConditionalGeneration.from_pretrained("cointegrated/rut5-small-normalizer") text = 'меня тобой не понимать' inputs = tokenizer(text, return_tensors='pt') with torch.no_grad(): hypotheses = model.generate( **inputs, do_sample=True, top_p=0.95, num_return_sequences=5, repetition_penalty=2.5, max_length=32, ) for h in hypotheses: print(tokenizer.decode(h, skip_special_tokens=True)) ``` A possible output is: ``` # Мне тебя не понимать. # Если бы ты понимаешь меня? # Я с тобой не понимаю. # Я тебя не понимаю. # Я не понимаю о чем ты. ```
{"language": "ru", "license": "mit", "tags": ["normalization", "denoising autoencoder", "russian"], "widget": [{"text": "\u043c\u0435\u043d\u044f \u0442\u043e\u0431\u043e\u0439 \u043d\u0435 \u043f\u043e\u043d\u0438\u043c\u0430\u0442\u044c"}]}
cointegrated/rut5-small-normalizer
null
[ "transformers", "pytorch", "jax", "safetensors", "t5", "text2text-generation", "normalization", "denoising autoencoder", "russian", "ru", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ru" ]
TAGS #transformers #pytorch #jax #safetensors #t5 #text2text-generation #normalization #denoising autoencoder #russian #ru #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
This is a small Russian denoising autoencoder. It can be used for restoring corrupted sentences. This model was produced by fine-tuning the rut5-small model on the task of reconstructing a sentence: * restoring word positions (after slightly shuffling them) * restoring dropped words and punctuation marks (after dropping some of them randomly) * restoring inflection of words (after changing their inflection randomly using natasha and pymorphy2 packages) The fine-tuning was performed on a Leipzig web corpus of Russian sentences. The model can be applied as follows: A possible output is:
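Another condensed sketch of the stripped snippet, again using the sampling settings from the full example above:

```python
# Sketch: restore a corrupted Russian sentence with rut5-small-normalizer.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("cointegrated/rut5-small-normalizer")
model = T5ForConditionalGeneration.from_pretrained("cointegrated/rut5-small-normalizer")

inputs = tokenizer("меня тобой не понимать", return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, do_sample=True, top_p=0.95,
                         repetition_penalty=2.5, max_length=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```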
[]
[ "TAGS\n#transformers #pytorch #jax #safetensors #t5 #text2text-generation #normalization #denoising autoencoder #russian #ru #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text2text-generation
transformers
This is a small Russian paraphraser based on the [google/mt5-small](https://huggingface.co/google/mt5-small) model. It has rather poor paraphrasing performance, but can be fine tuned for this or other tasks. This model was created by taking the [alenusch/mt5small-ruparaphraser](https://huggingface.co/alenusch/mt5small-ruparaphraser) model and stripping 96% of its vocabulary which is unrelated to the Russian language or infrequent. * The original model has 300M parameters, with 256M of them being input and output embeddings. * After shrinking the `sentencepiece` vocabulary from 250K to 20K the number of model parameters reduced to 65M parameters, and model size reduced from 1.1GB to 246MB. * The first 5K tokens in the new vocabulary are taken from the original `mt5-small`. * The next 15K tokens are the most frequent tokens obtained by tokenizing a Russian web corpus from the [Leipzig corpora collection](https://wortschatz.uni-leipzig.de/en/download/Russian). The model can be used as follows: ``` # !pip install transformers sentencepiece import torch from transformers import T5ForConditionalGeneration, T5Tokenizer tokenizer = T5Tokenizer.from_pretrained("cointegrated/rut5-small") model = T5ForConditionalGeneration.from_pretrained("cointegrated/rut5-small") text = 'Ехал Грека через реку, видит Грека в реке рак. ' inputs = tokenizer(text, return_tensors='pt') with torch.no_grad(): hypotheses = model.generate( **inputs, do_sample=True, top_p=0.95, num_return_sequences=10, repetition_penalty=2.5, max_length=32, ) for h in hypotheses: print(tokenizer.decode(h, skip_special_tokens=True)) ```
{"language": "ru", "license": "mit", "tags": ["paraphrasing", "russian"]}
cointegrated/rut5-small
null
[ "transformers", "pytorch", "jax", "safetensors", "mt5", "text2text-generation", "paraphrasing", "russian", "ru", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ru" ]
TAGS #transformers #pytorch #jax #safetensors #mt5 #text2text-generation #paraphrasing #russian #ru #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
This is a small Russian paraphraser based on the google/mt5-small model. It has rather poor paraphrasing performance, but can be fine-tuned for this or other tasks. This model was created by taking the alenusch/mt5small-ruparaphraser model and stripping 96% of its vocabulary, which is unrelated to the Russian language or infrequent. * The original model has 300M parameters, with 256M of them being input and output embeddings. * After shrinking the 'sentencepiece' vocabulary from 250K to 20K the number of model parameters reduced to 65M parameters, and model size reduced from 1.1GB to 246MB. * The first 5K tokens in the new vocabulary are taken from the original 'mt5-small'. * The next 15K tokens are the most frequent tokens obtained by tokenizing a Russian web corpus from the Leipzig corpora collection. The model can be used as follows:
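The stripped snippet, condensed; the sampling settings are those of the full example above:

```python
# Sketch: sample several rewrites of a Russian sentence with rut5-small.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("cointegrated/rut5-small")
model = T5ForConditionalGeneration.from_pretrained("cointegrated/rut5-small")

inputs = tokenizer("Ехал Грека через реку, видит Грека в реке рак.", return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, do_sample=True, top_p=0.95, num_return_sequences=3,
                         repetition_penalty=2.5, max_length=32)
for h in out:
    print(tokenizer.decode(h, skip_special_tokens=True))
```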
[]
[ "TAGS\n#transformers #pytorch #jax #safetensors #mt5 #text2text-generation #paraphrasing #russian #ru #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # chinese-address-ner This model is a fine-tuned version of [hfl/chinese-roberta-wwm-ext](https://huggingface.co/hfl/chinese-roberta-wwm-ext) on an unkown dataset. It achieves the following results on the evaluation set: - Loss: 0.1080 - Precision: 0.9664 - Recall: 0.9774 - F1: 0.9719 - Accuracy: 0.9758 ## Model description 输入一串地址中文信息,比如快递单:`北京市海淀区西北旺东路10号院(马连洼街道西北旺社区东北方向)`,按照行政级别(总有 7 级)抽取地址信息,返回每个 token 的类别。具体类别含义表示如下: | 返回类别 | BIO 体系 | 解释 | | ----------- | -------- | ---------------------- | | **LABEL_0** | O | 忽略信息 | | **LABEL_1** | B-A1 | 第一级地址(头) | | **LABEL_2** | I-A1 | 第一级地址(其余部分) | | ... | ... | ... | More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 50 - eval_batch_size: 50 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 2.5055 | 1.0 | 7 | 1.6719 | 0.1977 | 0.2604 | 0.2248 | 0.5649 | | 1.837 | 2.0 | 14 | 1.0719 | 0.4676 | 0.6 | 0.5256 | 0.7421 | | 1.0661 | 3.0 | 21 | 0.7306 | 0.6266 | 0.7472 | 0.6816 | 0.8106 | | 0.8373 | 4.0 | 28 | 0.5197 | 0.6456 | 0.8113 | 0.7191 | 0.8614 | | 0.522 | 5.0 | 35 | 0.3830 | 0.7667 | 0.8679 | 0.8142 | 0.9001 | | 0.4295 | 6.0 | 42 | 0.3104 | 0.8138 | 0.8906 | 0.8505 | 0.9178 | | 0.3483 | 7.0 | 49 | 0.2453 | 0.8462 | 0.9132 | 0.8784 | 0.9404 | | 0.2471 | 8.0 | 56 | 0.2081 | 0.8403 | 0.9132 | 0.8752 | 0.9428 | | 0.2299 | 9.0 | 63 | 0.1979 | 0.8419 | 0.9245 | 0.8813 | 0.9420 | | 0.1761 | 10.0 | 70 | 0.1823 | 0.8830 | 0.9396 | 0.9104 | 0.9500 | | 0.1434 | 11.0 | 77 | 0.1480 | 0.9036 | 0.9547 | 0.9284 | 0.9629 | | 0.134 | 12.0 | 84 | 0.1341 | 0.9173 | 0.9623 | 0.9392 | 0.9678 | | 0.128 | 13.0 | 91 | 0.1365 | 0.9375 | 0.9623 | 0.9497 | 0.9694 | | 0.0824 | 14.0 | 98 | 0.1159 | 0.9557 | 0.9774 | 0.9664 | 0.9734 | | 0.0744 | 15.0 | 105 | 0.1092 | 0.9591 | 0.9736 | 0.9663 | 0.9766 | | 0.0569 | 16.0 | 112 | 0.1117 | 0.9556 | 0.9736 | 0.9645 | 0.9742 | | 0.0559 | 17.0 | 119 | 0.1040 | 0.9628 | 0.9774 | 0.9700 | 0.9790 | | 0.0456 | 18.0 | 126 | 0.1052 | 0.9593 | 0.9774 | 0.9682 | 0.9782 | | 0.0405 | 19.0 | 133 | 0.1133 | 0.9590 | 0.9698 | 0.9644 | 0.9718 | | 0.0315 | 20.0 | 140 | 0.1060 | 0.9591 | 0.9736 | 0.9663 | 0.9750 | | 0.0262 | 21.0 | 147 | 0.1087 | 0.9554 | 0.9698 | 0.9625 | 0.9718 | | 0.0338 | 22.0 | 154 | 0.1183 | 0.9625 | 0.9698 | 0.9662 | 0.9726 | | 0.0225 | 23.0 | 161 | 0.1080 | 0.9664 | 0.9774 | 0.9719 | 0.9758 | | 0.028 | 24.0 | 168 | 0.1057 | 0.9591 | 0.9736 | 0.9663 | 0.9742 | | 0.0202 | 25.0 | 175 | 0.1062 | 0.9628 | 0.9774 | 0.9700 | 0.9766 | | 0.0168 | 26.0 | 182 | 0.1097 | 0.9664 | 0.9774 | 0.9719 | 0.9758 | | 0.0173 | 27.0 | 189 | 0.1093 | 0.9628 | 0.9774 | 0.9700 | 0.9774 | | 0.0151 | 28.0 | 196 | 0.1162 | 0.9628 | 0.9774 | 0.9700 | 0.9766 | | 0.0135 | 29.0 | 203 | 0.1126 | 0.9483 | 0.9698 | 0.9590 | 0.9758 | | 0.0179 | 30.0 | 210 | 0.1100 | 0.9449 | 0.9698 | 0.9572 | 0.9774 | | 0.0161 | 31.0 | 217 | 0.1098 | 0.9449 | 0.9698 | 0.9572 | 0.9766 | | 0.0158 | 32.0 
| 224 | 0.1191 | 0.9483 | 0.9698 | 0.9590 | 0.9734 | | 0.0151 | 33.0 | 231 | 0.1058 | 0.9483 | 0.9698 | 0.9590 | 0.9750 | | 0.0121 | 34.0 | 238 | 0.0990 | 0.9593 | 0.9774 | 0.9682 | 0.9790 | | 0.0092 | 35.0 | 245 | 0.1128 | 0.9519 | 0.9698 | 0.9607 | 0.9774 | | 0.0097 | 36.0 | 252 | 0.1181 | 0.9627 | 0.9736 | 0.9681 | 0.9766 | | 0.0118 | 37.0 | 259 | 0.1185 | 0.9591 | 0.9736 | 0.9663 | 0.9782 | | 0.0118 | 38.0 | 266 | 0.1021 | 0.9557 | 0.9774 | 0.9664 | 0.9823 | | 0.0099 | 39.0 | 273 | 0.1000 | 0.9559 | 0.9811 | 0.9683 | 0.9815 | | 0.0102 | 40.0 | 280 | 0.1025 | 0.9559 | 0.9811 | 0.9683 | 0.9815 | | 0.0068 | 41.0 | 287 | 0.1080 | 0.9522 | 0.9774 | 0.9646 | 0.9807 | | 0.0105 | 42.0 | 294 | 0.1157 | 0.9449 | 0.9698 | 0.9572 | 0.9766 | | 0.0083 | 43.0 | 301 | 0.1207 | 0.9380 | 0.9698 | 0.9536 | 0.9766 | | 0.0077 | 44.0 | 308 | 0.1208 | 0.9483 | 0.9698 | 0.9590 | 0.9766 | | 0.0077 | 45.0 | 315 | 0.1176 | 0.9483 | 0.9698 | 0.9590 | 0.9774 | | 0.0071 | 46.0 | 322 | 0.1137 | 0.9483 | 0.9698 | 0.9590 | 0.9790 | | 0.0075 | 47.0 | 329 | 0.1144 | 0.9483 | 0.9698 | 0.9590 | 0.9782 | | 0.0084 | 48.0 | 336 | 0.1198 | 0.9483 | 0.9698 | 0.9590 | 0.9766 | | 0.0103 | 49.0 | 343 | 0.1217 | 0.9519 | 0.9698 | 0.9607 | 0.9766 | | 0.0087 | 50.0 | 350 | 0.1230 | 0.9519 | 0.9698 | 0.9607 | 0.9766 | ### Framework versions - Transformers 4.8.2 - Pytorch 1.8.0 - Datasets 1.9.0 - Tokenizers 0.10.3
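The card documents token-level labels but gives no inference snippet; a minimal sketch using the generic token-classification pipeline (aggregation is left at the default; the returned classes are the LABEL_i names from the table above):

```python
# Sketch: tag a Chinese address with the fine-tuned token-classification model.
from transformers import pipeline

tagger = pipeline("token-classification", model="jiaqianjing/chinese-address-ner")

address = "北京市海淀区西北旺东路10号院"
for token in tagger(address):
    # Each item carries the predicted class (e.g. LABEL_1 / LABEL_2 for the first-level
    # address head and its continuation, per the table above) and a confidence score.
    print(token["word"], token["entity"], round(float(token["score"]), 3))
```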
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "chinese-address-ner", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.975825946817083}}]}]}
jiaqianjing/chinese-address-ner
null
[ "transformers", "pytorch", "bert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #bert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
chinese-address-ner =================== This model is a fine-tuned version of hfl/chinese-roberta-wwm-ext on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.1080 * Precision: 0.9664 * Recall: 0.9774 * F1: 0.9719 * Accuracy: 0.9758 Model description ----------------- Given a string of Chinese address text, e.g. a delivery slip such as '北京市海淀区西北旺东路10号院(马连洼街道西北旺社区东北方向)', the model extracts the address information by administrative level (7 levels in total) and returns a class for each token. The class meanings are as follows: Returned class: LABEL\_0, BIO scheme: O, meaning: ignored information Returned class: LABEL\_1, BIO scheme: B-A1, meaning: first-level address (beginning) Returned class: LABEL\_2, BIO scheme: I-A1, meaning: first-level address (remaining part) Returned class: ..., BIO scheme: ..., meaning: ... More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 50 * eval\_batch\_size: 50 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 50 ### Training results ### Framework versions * Transformers 4.8.2 * Pytorch 1.8.0 * Datasets 1.9.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 50\n* eval\\_batch\\_size: 50\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 50", "### Training results", "### Framework versions\n\n\n* Transformers 4.8.2\n* Pytorch 1.8.0\n* Datasets 1.9.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #bert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 50\n* eval\\_batch\\_size: 50\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 50", "### Training results", "### Framework versions\n\n\n* Transformers 4.8.2\n* Pytorch 1.8.0\n* Datasets 1.9.0\n* Tokenizers 0.10.3" ]
fill-mask
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-issues-128 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.2500 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 16 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.0975 | 1.0 | 291 | 1.7060 | | 1.648 | 2.0 | 582 | 1.4280 | | 1.4837 | 3.0 | 873 | 1.3980 | | 1.3978 | 4.0 | 1164 | 1.4040 | | 1.3314 | 5.0 | 1455 | 1.2032 | | 1.2954 | 6.0 | 1746 | 1.2814 | | 1.2448 | 7.0 | 2037 | 1.2635 | | 1.1983 | 8.0 | 2328 | 1.2071 | | 1.1849 | 9.0 | 2619 | 1.1675 | | 1.1414 | 10.0 | 2910 | 1.2095 | | 1.1314 | 11.0 | 3201 | 1.1858 | | 1.0943 | 12.0 | 3492 | 1.1658 | | 1.0838 | 13.0 | 3783 | 1.2336 | | 1.0733 | 14.0 | 4074 | 1.1606 | | 1.0627 | 15.0 | 4365 | 1.1188 | | 1.055 | 16.0 | 4656 | 1.2500 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "bert-base-uncased", "model-index": [{"name": "bert-base-uncased-issues-128", "results": []}]}
coldfir3/bert-base-uncased-issues-128
null
[ "transformers", "pytorch", "bert", "fill-mask", "generated_from_trainer", "base_model:bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #bert #fill-mask #generated_from_trainer #base_model-bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
bert-base-uncased-issues-128 ============================ This model is a fine-tuned version of bert-base-uncased on the None dataset. It achieves the following results on the evaluation set: * Loss: 1.2500 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 32 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 16 ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.10.0+cu111 * Datasets 1.17.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 16", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #bert #fill-mask #generated_from_trainer #base_model-bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 16", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2175 - Accuracy: 0.922 - F1: 0.9222 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8262 | 1.0 | 250 | 0.3073 | 0.904 | 0.9021 | | 0.2484 | 2.0 | 500 | 0.2175 | 0.922 | 0.9222 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["emotion"], "metrics": ["accuracy", "f1"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.922, "name": "Accuracy"}, {"type": "f1", "value": 0.9222116474112371, "name": "F1"}]}]}]}
coldfir3/distilbert-base-uncased-finetuned-emotion
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-emotion #base_model-distilbert-base-uncased #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-uncased-finetuned-emotion ========================================= This model is a fine-tuned version of distilbert-base-uncased on the emotion dataset. It achieves the following results on the evaluation set: * Loss: 0.2175 * Accuracy: 0.922 * F1: 0.9222 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 64 * eval\_batch\_size: 64 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 2 ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.10.0+cu111 * Datasets 1.17.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-emotion #base_model-distilbert-base-uncased #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-all This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1759 - F1: 0.8527 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.3038 | 1.0 | 835 | 0.1922 | 0.8065 | | 0.1559 | 2.0 | 1670 | 0.1714 | 0.8422 | | 0.1002 | 3.0 | 2505 | 0.1759 | 0.8527 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["f1"], "base_model": "xlm-roberta-base", "model-index": [{"name": "xlm-roberta-base-finetuned-panx-all", "results": []}]}
coldfir3/xlm-roberta-base-finetuned-panx-all
null
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #xlm-roberta #token-classification #generated_from_trainer #base_model-xlm-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us
xlm-roberta-base-finetuned-panx-all =================================== This model is a fine-tuned version of xlm-roberta-base on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.1759 * F1: 0.8527 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 24 * eval\_batch\_size: 24 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.10.0+cu111 * Datasets 1.17.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #xlm-roberta #token-classification #generated_from_trainer #base_model-xlm-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1667 - F1: 0.8582 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2885 | 1.0 | 715 | 0.1817 | 0.8287 | | 0.1497 | 2.0 | 1430 | 0.1618 | 0.8442 | | 0.0944 | 3.0 | 2145 | 0.1667 | 0.8582 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["f1"], "base_model": "xlm-roberta-base", "model-index": [{"name": "xlm-roberta-base-finetuned-panx-de-fr", "results": []}]}
coldfir3/xlm-roberta-base-finetuned-panx-de-fr
null
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #xlm-roberta #token-classification #generated_from_trainer #base_model-xlm-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us
xlm-roberta-base-finetuned-panx-de-fr ===================================== This model is a fine-tuned version of xlm-roberta-base on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.1667 * F1: 0.8582 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 24 * eval\_batch\_size: 24 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.10.0+cu111 * Datasets 1.17.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #xlm-roberta #token-classification #generated_from_trainer #base_model-xlm-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.3925 - F1: 0.7075 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.1493 | 1.0 | 50 | 0.5884 | 0.4748 | | 0.5135 | 2.0 | 100 | 0.4088 | 0.6623 | | 0.3558 | 3.0 | 150 | 0.3925 | 0.7075 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["xtreme"], "metrics": ["f1"], "model-index": [{"name": "xlm-roberta-base-finetuned-panx-en", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "xtreme", "type": "xtreme", "args": "PAN-X.en"}, "metrics": [{"type": "f1", "value": 0.7075365579302588, "name": "F1"}]}]}]}
coldfir3/xlm-roberta-base-finetuned-panx-en
null
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #xlm-roberta #token-classification #generated_from_trainer #dataset-xtreme #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us
xlm-roberta-base-finetuned-panx-en ================================== This model is a fine-tuned version of xlm-roberta-base on the xtreme dataset. It achieves the following results on the evaluation set: * Loss: 0.3925 * F1: 0.7075 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 24 * eval\_batch\_size: 24 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.10.0+cu111 * Datasets 1.17.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #xlm-roberta #token-classification #generated_from_trainer #dataset-xtreme #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.2651 - F1: 0.8355 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.5954 | 1.0 | 191 | 0.3346 | 0.7975 | | 0.2689 | 2.0 | 382 | 0.2900 | 0.8347 | | 0.1821 | 3.0 | 573 | 0.2651 | 0.8355 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["xtreme"], "metrics": ["f1"], "base_model": "xlm-roberta-base", "model-index": [{"name": "xlm-roberta-base-finetuned-panx-fr", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "xtreme", "type": "xtreme", "args": "PAN-X.fr"}, "metrics": [{"type": "f1", "value": 0.8354854938789199, "name": "F1"}]}]}]}
coldfir3/xlm-roberta-base-finetuned-panx-fr
null
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "base_model:xlm-roberta-base", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #xlm-roberta #token-classification #generated_from_trainer #dataset-xtreme #base_model-xlm-roberta-base #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us
xlm-roberta-base-finetuned-panx-fr ================================== This model is a fine-tuned version of xlm-roberta-base on the xtreme dataset. It achieves the following results on the evaluation set: * Loss: 0.2651 * F1: 0.8355 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 24 * eval\_batch\_size: 24 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.10.0+cu111 * Datasets 1.17.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #xlm-roberta #token-classification #generated_from_trainer #dataset-xtreme #base_model-xlm-roberta-base #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-it This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.2323 - F1: 0.8228 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.8126 | 1.0 | 70 | 0.3361 | 0.7231 | | 0.2995 | 2.0 | 140 | 0.2526 | 0.8079 | | 0.1865 | 3.0 | 210 | 0.2323 | 0.8228 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
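As a quick, hedged sanity check of the reported F1, one might run the checkpoint over a few PAN-X.it validation sentences as sketched below; the split slice and the space-joined tokenization are illustrative simplifications, not the original evaluation script.
```python
from datasets import load_dataset
from transformers import pipeline

# Illustrative check only - not the evaluation code used for this card.
ds = load_dataset("xtreme", "PAN-X.it", split="validation[:8]")  # small assumed sample
ner = pipeline(
    "token-classification",
    model="coldfir3/xlm-roberta-base-finetuned-panx-it",
    aggregation_strategy="simple",
)

for example in ds:
    # PAN-X provides pre-tokenized words; joining with spaces is a simplification.
    text = " ".join(example["tokens"])
    predictions = [(ent["word"], ent["entity_group"]) for ent in ner(text)]
    print(text)
    print(predictions)
```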
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["xtreme"], "metrics": ["f1"], "base_model": "xlm-roberta-base", "model-index": [{"name": "xlm-roberta-base-finetuned-panx-it", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "xtreme", "type": "xtreme", "args": "PAN-X.it"}, "metrics": [{"type": "f1", "value": 0.822805578342904, "name": "F1"}]}]}]}
coldfir3/xlm-roberta-base-finetuned-panx-it
null
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "base_model:xlm-roberta-base", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #xlm-roberta #token-classification #generated_from_trainer #dataset-xtreme #base_model-xlm-roberta-base #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us
xlm-roberta-base-finetuned-panx-it ================================== This model is a fine-tuned version of xlm-roberta-base on the xtreme dataset. It achieves the following results on the evaluation set: * Loss: 0.2323 * F1: 0.8228 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 24 * eval\_batch\_size: 24 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.10.0+cu111 * Datasets 1.17.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #xlm-roberta #token-classification #generated_from_trainer #dataset-xtreme #base_model-xlm-roberta-base #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
text-generation
transformers
# Harry Potter DialoGPT Model
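Since the card is only a title, here is a hedged single-turn chat sketch in the usual DialoGPT style; the prompt and generation settings are illustrative assumptions rather than values taken from this repository.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Single-turn chat sketch in the standard DialoGPT style; the prompt and generation
# settings below are illustrative assumptions, not taken from this repository.
tokenizer = AutoTokenizer.from_pretrained("colochoplay/DialoGTP-small-harrypotter")
model = AutoModelForCausalLM.from_pretrained("colochoplay/DialoGTP-small-harrypotter")

user_input = "Hello, who are you?"
input_ids = tokenizer.encode(user_input + tokenizer.eos_token, return_tensors="pt")

# Generate a reply and strip the user prompt from the decoded output.
reply_ids = model.generate(
    input_ids,
    max_length=100,
    pad_token_id=tokenizer.eos_token_id,
)
reply = tokenizer.decode(reply_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True)
print(reply)
```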
{"tags": ["conversational"]}
colochoplay/DialoGTP-small-harrypotter
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Harry Potter DialoGPT Model
[ "# Harry Potter DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Harry Potter DialoGPT Model" ]
fill-mask
transformers
# BERT base Japanese model This repository contains a BERT base model trained on a Japanese Wikipedia dataset. ## Training data The [Japanese Wikipedia](https://ja.wikipedia.org/wiki/Wikipedia:データベースダウンロード) dataset as of June 20, 2021, which is released under [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/), is used for training. The dataset is split into three subsets - train, valid and test. Both the tokenizer and the model are trained with the train split. ## Model description The model architecture is the same as the BERT base model (hidden_size: 768, num_hidden_layers: 12, num_attention_heads: 12, max_position_embeddings: 512) except for the vocabulary size, which is set to 32,000 instead of the original 30,522. For the model, `transformers.BertForPreTraining` is used. ## Tokenizer description A [SentencePiece](https://github.com/google/sentencepiece) tokenizer is used for this model. The tokenizer model was trained with 1,000,000 samples extracted from the train split. The vocabulary size is set to 32,000. The `add_dummy_prefix` option is set to `True` because words are not separated by whitespace in Japanese. After training, the model is imported as `transformers.DebertaV2Tokenizer` because it supports SentencePiece models and its behavior is consistent whether the `use_fast` option is set to `True` or `False`. **Note:** The meaning of "consistent" here is as follows. For example, ALBERT provides both AlbertTokenizer and AlbertTokenizerFast, and the fast tokenizer is used by default. However, their tokenization behavior differs, and the behavior this model expects is that of the non-fast version. Although passing `use_fast=False` to AutoTokenizer or pipeline forces the non-fast version of the tokenizer, this option cannot be set in config.json or the model card, so unexpected behavior occurs when using the Inference API. To avoid this kind of problem, `transformers.DebertaV2Tokenizer` is used for this model. ## Training Training details are as follows. * A gradient update is performed every 256 samples (batch size: 8, accumulate_grad_batches: 32) * The gradient clip norm is 1.0 * The learning rate starts from 0 and is linearly increased to 0.0001 over the first 10,000 steps * The training set contains around 20M samples. Because 80k * 256 ~ 20M, one epoch has around 80k steps. Training was conducted on Ubuntu 18.04.5 LTS with one RTX 2080 Ti. Training continued until the validation loss got worse. In total, the number of training steps was around 214k. The test set loss was 2.80. Training code is available in [a GitHub repository](https://github.com/colorfulscoop/bert-ja). ## Usage First, install dependencies. ```sh $ pip install torch==1.8.0 transformers==4.8.2 sentencepiece==0.1.95 ``` Then use `transformers.pipeline` to try the fill-mask task.
```python >>> import transformers >>> pipeline = transformers.pipeline("fill-mask", "colorfulscoop/bert-base-ja", revision="v1.0") >>> pipeline("専門として[MASK]を専攻しています") [{'sequence': '専門として工学を専攻しています', 'score': 0.03630176931619644, 'token': 3988, 'token_str': '工学'}, {'sequence': '専門として政治学を専攻しています', 'score': 0.03547220677137375, 'token': 22307, 'token_str': '政治学'}, {'sequence': '専門として教育を専攻しています', 'score': 0.03162326663732529, 'token': 414, 'token_str': '教育'}, {'sequence': '専門として経済学を専攻しています', 'score': 0.026036914438009262, 'token': 6814, 'token_str': '経済学'}, {'sequence': '専門として法学を専攻しています', 'score': 0.02561848610639572, 'token': 10810, 'token_str': '法学'}] ``` Note: specifying the `revision` option is recommended for reproducibility when downloading a model via `transformers.pipeline` or `transformers.AutoModel.from_pretrained`. ## License Copyright (c) 2021 Colorful Scoop All the models included in this repository are licensed under [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/). **Disclaimer:** The model may generate text that is similar to the training data, untrue, or biased. Use of the model is at your sole risk. Colorful Scoop makes no warranty or guarantee of any outputs from the model. Colorful Scoop is not liable for any trouble, loss, or damage arising from the model output. --- This model utilizes the following data as training data * **Name:** ウィキペディア (Wikipedia): フリー百科事典 * **Credit:** https://ja.wikipedia.org/ * **License:** [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/) * **Link:** https://ja.wikipedia.org/
{"language": "ja", "license": "cc-by-sa-4.0", "datasets": "wikipedia", "pipeline_tag": "fill-mask", "widget": [{"text": "\u5f97\u610f\u306a\u79d1\u76ee\u306f[MASK]\u3067\u3059\u3002"}]}
colorfulscoop/bert-base-ja
null
[ "transformers", "pytorch", "tf", "bert", "pretraining", "fill-mask", "ja", "dataset:wikipedia", "license:cc-by-sa-4.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ja" ]
TAGS #transformers #pytorch #tf #bert #pretraining #fill-mask #ja #dataset-wikipedia #license-cc-by-sa-4.0 #endpoints_compatible #region-us
# BERT base Japanese model This repository contains a BERT base model trained on Japanese Wikipedia dataset. ## Training data Japanese Wikipedia dataset as of June 20, 2021 which is released under Creative Commons Attribution-ShareAlike 3.0 is used for training. The dataset is splitted into three subsets - train, valid and test. Both tokenizer and model are trained with the train split. ## Model description The model architecture is the same as BERT base model (hidden_size: 768, num_hidden_layers: 12, num_attention_heads: 12, max_position_embeddings: 512) except for a vocabulary size. The vocabulary size is set to 32,000 instead of an original size of 30,522. For the model, 'transformers.BertForPreTraining' is used. ## Tokenizer description SentencePiece tokenizer is used as a tokenizer for this model. While training, the tokenizer model was trained with 1,000,000 samples which were extracted from the train split. The vocabulary size is set to 32,000. A 'add_dummy_prefix' option is set to 'True' because words are not separated by whitespaces in Japanese. After training, the model is imported to 'transformers.DebertaV2Tokenizer' because it supports SentencePiece models and its behavior is consistent when 'use_fast' option is set to 'True' or 'False'. Note: The meaning of "consistent" here is as follows. For example, AlbertTokenizer provides AlbertTokenizer and AlbertTokenizerFast. Fast model is used as default. However, the tokenization behavior between them is different and a behavior this mdoel expects is the verions of not fast. Although 'use_fast=False' option passing to AutoTokenier or pipeline solves this problem to force to use not fast version of the tokenizer, this option cannot be passed to URL or model card. Therefore unexpected behavior happens when using Inference API. To avoid this kind of problems, 'transformers.DebertaV2Tokenizer' is used in this model. ## Training Training details are as follows. * gradient update is every 256 samples (batch size: 8, accumulate_grad_batches: 32) * gradient clip norm is 1.0 * Learning rate starts from 0 and linearly increased to 0.0001 in the first 10,000 steps * The training set contains around 20M samples. Because 80k * 256 ~ 20M, 1 epochs has around 80k steps. Trainind was conducted on Ubuntu 18.04.5 LTS with one RTX 2080 Ti. The training continued until validation loss got worse. Totally the number of training steps were around 214k. The test set loss was 2.80 . Training code is available in a GitHub repository. ## Usage First, install dependecies. Then use 'transformers.pipeline' to try mask fill task. Note: specifying a 'revision' option is recommended to keep reproducibility when downloading a model via 'transformers.pipeline' or 'transformers.AutoModel.from_pretrained' . ## License Copyright (c) 2021 Colorful Scoop All the models included in this repository are licensed under Creative Commons Attribution-ShareAlike 3.0. Disclaimer: The model potentially has possibility that it generates similar texts in the training data, texts not to be true, or biased texts. Use of the model is at your sole risk. Colorful Scoop makes no warranty or guarantee of any outputs from the model. Colorful Scoop is not liable for any trouble, loss, or damage arising from the model output. --- This model utilizes the following data as training data * Name: ウィキペディア (Wikipedia): フリー百科事典 * Credit: URL * License: Creative Commons Attribution-ShareAlike 3.0 * Link: URL
[ "# BERT base Japanese model\n\nThis repository contains a BERT base model trained on Japanese Wikipedia dataset.", "## Training data\n\nJapanese Wikipedia dataset as of June 20, 2021 which is released under Creative Commons Attribution-ShareAlike 3.0 is used for training.\nThe dataset is splitted into three subsets - train, valid and test. Both tokenizer and model are trained with the train split.", "## Model description\n\nThe model architecture is the same as BERT base model (hidden_size: 768, num_hidden_layers: 12, num_attention_heads: 12, max_position_embeddings: 512) except for a vocabulary size.\nThe vocabulary size is set to 32,000 instead of an original size of 30,522.\n\nFor the model, 'transformers.BertForPreTraining' is used.", "## Tokenizer description\n\nSentencePiece tokenizer is used as a tokenizer for this model.\n\nWhile training, the tokenizer model was trained with 1,000,000 samples which were extracted from the train split.\nThe vocabulary size is set to 32,000. A 'add_dummy_prefix' option is set to 'True' because words are not separated by whitespaces in Japanese.\n\nAfter training, the model is imported to 'transformers.DebertaV2Tokenizer' because it supports SentencePiece models and its behavior is consistent when 'use_fast' option is set to 'True' or 'False'.\n\nNote:\nThe meaning of \"consistent\" here is as follows.\nFor example, AlbertTokenizer provides AlbertTokenizer and AlbertTokenizerFast. Fast model is used as default. However, the tokenization behavior between them is different and a behavior this mdoel expects is the verions of not fast.\nAlthough 'use_fast=False' option passing to AutoTokenier or pipeline solves this problem to force to use not fast version of the tokenizer, this option cannot be passed to URL or model card.\nTherefore unexpected behavior happens when using Inference API. To avoid this kind of problems, 'transformers.DebertaV2Tokenizer' is used in this model.", "## Training\n\nTraining details are as follows.\n\n* gradient update is every 256 samples (batch size: 8, accumulate_grad_batches: 32)\n* gradient clip norm is 1.0\n* Learning rate starts from 0 and linearly increased to 0.0001 in the first 10,000 steps\n* The training set contains around 20M samples. Because 80k * 256 ~ 20M, 1 epochs has around 80k steps.\n\nTrainind was conducted on Ubuntu 18.04.5 LTS with one RTX 2080 Ti.\n\nThe training continued until validation loss got worse. Totally the number of training steps were around 214k.\nThe test set loss was 2.80 .\n\nTraining code is available in a GitHub repository.", "## Usage\n\nFirst, install dependecies.\n\n\n\nThen use 'transformers.pipeline' to try mask fill task.\n\n\n\nNote: specifying a 'revision' option is recommended to keep reproducibility when downloading a model via 'transformers.pipeline' or 'transformers.AutoModel.from_pretrained' .", "## License\n\nCopyright (c) 2021 Colorful Scoop\n\nAll the models included in this repository are licensed under Creative Commons Attribution-ShareAlike 3.0.\n\nDisclaimer: The model potentially has possibility that it generates similar texts in the training data, texts not to be true, or biased texts. Use of the model is at your sole risk. Colorful Scoop makes no warranty or guarantee of any outputs from the model. 
Colorful Scoop is not liable for any trouble, loss, or damage arising from the model output.\n\n---\n\nThis model utilizes the following data as training data\n\n* Name: ウィキペディア (Wikipedia): フリー百科事典\n* Credit: URL\n* License: Creative Commons Attribution-ShareAlike 3.0\n* Link: URL" ]
[ "TAGS\n#transformers #pytorch #tf #bert #pretraining #fill-mask #ja #dataset-wikipedia #license-cc-by-sa-4.0 #endpoints_compatible #region-us \n", "# BERT base Japanese model\n\nThis repository contains a BERT base model trained on Japanese Wikipedia dataset.", "## Training data\n\nJapanese Wikipedia dataset as of June 20, 2021 which is released under Creative Commons Attribution-ShareAlike 3.0 is used for training.\nThe dataset is splitted into three subsets - train, valid and test. Both tokenizer and model are trained with the train split.", "## Model description\n\nThe model architecture is the same as BERT base model (hidden_size: 768, num_hidden_layers: 12, num_attention_heads: 12, max_position_embeddings: 512) except for a vocabulary size.\nThe vocabulary size is set to 32,000 instead of an original size of 30,522.\n\nFor the model, 'transformers.BertForPreTraining' is used.", "## Tokenizer description\n\nSentencePiece tokenizer is used as a tokenizer for this model.\n\nWhile training, the tokenizer model was trained with 1,000,000 samples which were extracted from the train split.\nThe vocabulary size is set to 32,000. A 'add_dummy_prefix' option is set to 'True' because words are not separated by whitespaces in Japanese.\n\nAfter training, the model is imported to 'transformers.DebertaV2Tokenizer' because it supports SentencePiece models and its behavior is consistent when 'use_fast' option is set to 'True' or 'False'.\n\nNote:\nThe meaning of \"consistent\" here is as follows.\nFor example, AlbertTokenizer provides AlbertTokenizer and AlbertTokenizerFast. Fast model is used as default. However, the tokenization behavior between them is different and a behavior this mdoel expects is the verions of not fast.\nAlthough 'use_fast=False' option passing to AutoTokenier or pipeline solves this problem to force to use not fast version of the tokenizer, this option cannot be passed to URL or model card.\nTherefore unexpected behavior happens when using Inference API. To avoid this kind of problems, 'transformers.DebertaV2Tokenizer' is used in this model.", "## Training\n\nTraining details are as follows.\n\n* gradient update is every 256 samples (batch size: 8, accumulate_grad_batches: 32)\n* gradient clip norm is 1.0\n* Learning rate starts from 0 and linearly increased to 0.0001 in the first 10,000 steps\n* The training set contains around 20M samples. Because 80k * 256 ~ 20M, 1 epochs has around 80k steps.\n\nTrainind was conducted on Ubuntu 18.04.5 LTS with one RTX 2080 Ti.\n\nThe training continued until validation loss got worse. Totally the number of training steps were around 214k.\nThe test set loss was 2.80 .\n\nTraining code is available in a GitHub repository.", "## Usage\n\nFirst, install dependecies.\n\n\n\nThen use 'transformers.pipeline' to try mask fill task.\n\n\n\nNote: specifying a 'revision' option is recommended to keep reproducibility when downloading a model via 'transformers.pipeline' or 'transformers.AutoModel.from_pretrained' .", "## License\n\nCopyright (c) 2021 Colorful Scoop\n\nAll the models included in this repository are licensed under Creative Commons Attribution-ShareAlike 3.0.\n\nDisclaimer: The model potentially has possibility that it generates similar texts in the training data, texts not to be true, or biased texts. Use of the model is at your sole risk. Colorful Scoop makes no warranty or guarantee of any outputs from the model. 
Colorful Scoop is not liable for any trouble, loss, or damage arising from the model output.\n\n---\n\nThis model utilizes the following data as training data\n\n* Name: ウィキペディア (Wikipedia): フリー百科事典\n* Credit: URL\n* License: Creative Commons Attribution-ShareAlike 3.0\n* Link: URL" ]
text-generation
transformers
# GPT-2 small Japanese model This repository contains a GPT2-small model trained on a Japanese Wikipedia dataset. ## Training data The [Japanese Wikipedia](https://ja.wikipedia.org/wiki/Wikipedia:データベースダウンロード) dataset as of Aug 20, 2021, released under [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/), is used for both the tokenizer and the GPT-2 model. We split the dataset into three subsets - train, valid and test sets. Both the tokenizer and the model were trained on the train set. The train set contains around 540M tokens. ## Model description The model architecture is the same as the GPT-2 small model (n_ctx: 1024, n_embd: 768, n_head: 12, n_layer: 12) except for the vocabulary size, which is set to 32,000 instead of the original 50,257. `transformers.GPT2LMHeadModel` is used for training. ## Tokenizer description [SentencePiece](https://github.com/google/sentencepiece) is used as the tokenizer for this model. We utilized 1,000,000 sentences from the train set. The vocabulary size was 32,000. The `add_dummy_prefix` option was set to `True` because Japanese words are not separated by whitespace. After training, the tokenizer model was imported as `transformers.BERTGenerationTokenizer` because it supports SentencePiece models and does not add any special tokens by default, which is especially useful for a text generation task. ## Training The model was trained on the train set for 30 epochs with batch size 32. Each sample contained 1024 tokens. We utilized the Adam optimizer. The learning rate was linearly increased from `0` to `1e-4` during the first 10,000 steps. The clip norm was set to `1.0`. The test set perplexity of the trained model was 29.13. Please refer to [GitHub](https://github.com/colorfulscoop/gpt-ja) for more training details. ## Usage First, install dependencies. ```sh $ pip install transformers==4.10.0 torch==1.8.1 sentencepiece==0.1.96 ``` Then use pipeline to generate sentences. ```python >>> import transformers >>> pipeline = transformers.pipeline("text-generation", "colorfulscoop/gpt2-small-ja") >>> pipeline("統計的機械学習でのニューラルネットワーク", do_sample=True, top_p=0.95, top_k=50, num_return_sequences=3) ``` **Note:** The default model configuration `config.json` sets parameters for text generation with `do_sample=True`, `top_k=50`, `top_p=0.95`. Override these parameters when you need different values. ## Versions We recommend specifying `revision` to load the model for reproducibility. | Revision | Date of Wikipedia dump | | --- | --- | | 20210820.1.0 | Aug 20, 2021 | | 20210301.1.0 | March 1, 2021 | You can specify `revision` as follows. ```py # Example of pipeline >>> transformers.pipeline("text-generation", "colorfulscoop/gpt2-small-ja", revision="20210820.1.0") # Example of AutoModel >>> transformers.AutoModel.from_pretrained("colorfulscoop/gpt2-small-ja", revision="20210820.1.0") ``` ## License All the models included in this repository are licensed under [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/). **Disclaimer:** The model may generate text that is similar to the training data, untrue, or biased. Use of the model is at your sole risk. Colorful Scoop makes no warranty or guarantee of any outputs from the model. Colorful Scoop is not liable for any trouble, loss, or damage arising from the model output. **Author:** Colorful Scoop
{"language": "ja", "license": "cc", "datasets": "wikipedia", "widget": [{"text": "\u7d71\u8a08\u7684\u6a5f\u68b0\u5b66\u7fd2\u3067\u306e\u30cb\u30e5\u30fc\u30e9\u30eb\u30cd\u30c3\u30c8\u30ef\u30fc\u30af"}]}
colorfulscoop/gpt2-small-ja
null
[ "transformers", "pytorch", "tf", "gpt2", "text-generation", "ja", "dataset:wikipedia", "license:cc", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ja" ]
TAGS #transformers #pytorch #tf #gpt2 #text-generation #ja #dataset-wikipedia #license-cc #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
GPT-2 small Japanese model ========================== This repository contains a GPT2-small model trained on Japanese Wikipedia dataset. Training data ------------- Japanese Wikipedia dataset as of Aug20, 2021 released under Creative Commons Attribution-ShareAlike 3.0 is used for both tokenizer and GPT-2 model. We splitted the dataset into three subsets - train, valid and test sets. Both tokenizer and model were trained on the train set. Train set contains around 540M tokens. Model description ----------------- The model architecture is the same as GPT-2 small model (n\_ctx: 1024, n\_embd 768, n\_head: 12, n\_layer: 12) except for a vocabulary size. The vocabulary size is set to 32,000 instead of an original size of 50,257. 'transformers.GPT2LMHeadModel' is used for training. Tokenizer description --------------------- SentencePiece is used as a tokenizer for this model. We utilized 1,000,000 sentences from train set. The vocabulary size was 32,000. A 'add\_dummy\_prefix' option was set to 'True' because Japanese words are not separated by whitespaces. After training, the tokenizer model was imported as 'transformers.BERTGenerationTokenizer' because it supports SentencePiece models and it does not add any special tokens as default, which is useful expecially for a text generation task. Training -------- The model was trained on the train set for 30 epochs with batch size 32. Each sample contained 1024 tokens. We utilized Adam optimizer. Learning rate was linearly increased from '0' to '1e-4' during the first 10,000 steps. A clip norm was set to '1.0'. Test set perplexity of the trained model was 29.13. Please refer to GitHub for more training details. Usage ----- First, install dependecies. Then use pipeline to generate sentences. Note: The default model configuration 'URL' sets parameters for text generation with 'do\_sample=True', 'top\_k=50', 'top\_p=0.95'. Please set these parameters when you need to use different parameters. Versions -------- We recommend to specify 'revision' to load the model for reproducibility. You can specify 'revision' as follows. License ------- All the models included in this repository are licensed under Creative Commons Attribution-ShareAlike 3.0. Disclaimer: The model potentially has possibility that it generates similar texts in the training data, texts not to be true, or biased texts. Use of the model is at your sole risk. Colorful Scoop makes no warranty or guarantee of any outputs from the model. Colorful Scoop is not liable for any trouble, loss, or damage arising from the model output. Author: Colorful Scoop
[]
[ "TAGS\n#transformers #pytorch #tf #gpt2 #text-generation #ja #dataset-wikipedia #license-cc #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n" ]
sentence-similarity
sentence-transformers
# Sentence BERT base Japanese model This repository contains a Sentence BERT base model for Japanese. ## Pretrained model This model utilizes the Japanese BERT model [colorfulscoop/bert-base-ja](https://huggingface.co/colorfulscoop/bert-base-ja) v1.0, released under [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/), as a pretrained model. ## Training data The [Japanese SNLI dataset](https://nlp.ist.i.kyoto-u.ac.jp/index.php?%E6%97%A5%E6%9C%AC%E8%AA%9ESNLI%28JSNLI%29%E3%83%87%E3%83%BC%E3%82%BF%E3%82%BB%E3%83%83%E3%83%88) released under [Creative Commons Attribution-ShareAlike 4.0](https://creativecommons.org/licenses/by-sa/4.0/) is used for training. The original training dataset is split into train/valid datasets. Finally, the following data is prepared. * Train data: 523,005 samples * Valid data: 10,000 samples * Test data: 3,916 samples ## Model description This model utilizes the `SentenceTransformer` model from [sentence-transformers](https://github.com/UKPLab/sentence-transformers). The model detail is as below. ```py >>> from sentence_transformers import SentenceTransformer >>> SentenceTransformer("colorfulscoop/sbert-base-ja") SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Training This model fine-tuned [colorfulscoop/bert-base-ja](https://huggingface.co/colorfulscoop/bert-base-ja) with a softmax classifier over the 3 SNLI labels. The AdamW optimizer with a learning rate of 2e-05, linearly warmed up over the first 10% of the training data, was used. The model was trained for 1 epoch with batch size 8. Note: in the original paper of [Sentence BERT](https://arxiv.org/abs/1908.10084), the batch size of the model trained on SNLI and Multi-Genre NLI was 16. In this model, the dataset is around half the size of the original one, therefore the batch size was set to 8, half of the original batch size of 16. Training was conducted on Ubuntu 18.04.5 LTS with one RTX 2080 Ti. After training, the test set accuracy reached 0.8529. Training code is available in [a GitHub repository](https://github.com/colorfulscoop/sbert-ja). ## Usage First, install dependencies. ```sh $ pip install sentence-transformers==2.0.0 ``` Then initialize the `SentenceTransformer` model and use the `encode` method to convert sentences to vectors. ```py >>> from sentence_transformers import SentenceTransformer >>> model = SentenceTransformer("colorfulscoop/sbert-base-ja") >>> sentences = ["外をランニングするのが好きです", "海外旅行に行くのが趣味です"] >>> model.encode(sentences) ``` ## License Copyright (c) 2021 Colorful Scoop All the models included in this repository are licensed under [Creative Commons Attribution-ShareAlike 4.0](https://creativecommons.org/licenses/by-sa/4.0/). **Disclaimer:** Use of this model is at your sole risk. Colorful Scoop makes no warranty or guarantee of any outputs from the model. Colorful Scoop is not liable for any trouble, loss, or damage arising from the model output. --- This model utilizes the following pretrained model. * **Name:** bert-base-ja * **Credit:** (c) 2021 Colorful Scoop * **License:** [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/) * **Disclaimer:** The model may generate text that is similar to the training data, untrue, or biased. 
Use of the model is at your sole risk. Colorful Scoop makes no warranty or guarantee of any outputs from the model. Colorful Scoop is not liable for any trouble, loss, or damage arising from the model output. * **Link:** https://huggingface.co/colorfulscoop/bert-base-ja --- This model utilizes the following data for fine-tuning. * **Name:** 日本語SNLI(JSNLI)データセット * **Credit:** [https://nlp.ist.i.kyoto-u.ac.jp/index.php?日本語SNLI(JSNLI)データセット](https://nlp.ist.i.kyoto-u.ac.jp/index.php?%E6%97%A5%E6%9C%AC%E8%AA%9ESNLI%28JSNLI%29%E3%83%87%E3%83%BC%E3%82%BF%E3%82%BB%E3%83%83%E3%83%88) * **License:** [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/) * **Link:** [https://nlp.ist.i.kyoto-u.ac.jp/index.php?日本語SNLI(JSNLI)データセット](https://nlp.ist.i.kyoto-u.ac.jp/index.php?%E6%97%A5%E6%9C%AC%E8%AA%9ESNLI%28JSNLI%29%E3%83%87%E3%83%BC%E3%82%BF%E3%82%BB%E3%83%83%E3%83%88)
{"language": "ja", "license": "cc-by-sa-4.0", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity", "widget": {"source_sentence": "\u8d70\u308b\u306e\u304c\u8da3\u5473\u3067\u3059", "sentences": ["\u5916\u3092\u30e9\u30f3\u30cb\u30f3\u30b0\u3059\u308b\u306e\u304c\u597d\u304d\u3067\u3059", "\u904b\u52d5\u306f\u305d\u3053\u305d\u3053\u3067\u3059", "\u8d70\u308b\u306e\u306f\u5acc\u3044\u3067\u3059"]}}
colorfulscoop/sbert-base-ja
null
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "ja", "arxiv:1908.10084", "license:cc-by-sa-4.0", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1908.10084" ]
[ "ja" ]
TAGS #sentence-transformers #pytorch #bert #feature-extraction #sentence-similarity #ja #arxiv-1908.10084 #license-cc-by-sa-4.0 #endpoints_compatible #has_space #region-us
# Sentence BERT base Japanese model This repository contains a Sentence BERT base model for Japanese. ## Pretrained model This model utilizes a Japanese BERT model colorfulscoop/bert-base-ja v1.0 released under Creative Commons Attribution-ShareAlike 3.0 as a pretrained model. ## Training data Japanese SNLI dataset released under Creative Commons Attribution-ShareAlike 4.0 is used for training. Original training dataset is splitted into train/valid dataset. Finally, follwoing data is prepared. * Train data: 523,005 samples * Valid data: 10,000 samples * Test data: 3,916 samples ## Model description This model utilizes 'SentenceTransformer' model from the sentence-transformers . The model detail is as below. ## Training This model finetuned colorfulscoop/bert-base-ja with Softmax classifier of 3 labels of SNLI. AdamW optimizer with learning rate of 2e-05 linearly warmed-up in 10% of train data was used. The model was trained in 1 epoch with batch size 8. Note: in a original paper of Sentence BERT, a batch size of the model trained on SNLI and Multi-Genle NLI was 16. In this model, the dataset is around half smaller than the origial one, therefore the batch size was set to half of the original batch size of 16. Trainind was conducted on Ubuntu 18.04.5 LTS with one RTX 2080 Ti. After training, test set accuracy reached to 0.8529. Training code is available in a GitHub repository. ## Usage First, install dependecies. Then initialize 'SentenceTransformer' model and use 'encode' method to convert to vectors. ## License Copyright (c) 2021 Colorful Scoop All the models included in this repository are licensed under Creative Commons Attribution-ShareAlike 4.0. Disclaimer: Use of this model is at your sole risk. Colorful Scoop makes no warranty or guarantee of any outputs from the model. Colorful Scoop is not liable for any trouble, loss, or damage arising from the model output. --- This model utilizes the folllowing pretrained model. * Name: bert-base-ja * Credit: (c) 2021 Colorful Scoop * License: Creative Commons Attribution-ShareAlike 3.0 * Disclaimer: The model potentially has possibility that it generates similar texts in the training data, texts not to be true, or biased texts. Use of the model is at your sole risk. Colorful Scoop makes no warranty or guarantee of any outputs from the model. Colorful Scoop is not liable for any trouble, loss, or damage arising from the model output. * Link: URL --- This model utilizes the following data for fine-tuning. * Name: 日本語SNLI(JSNLI)データセット * Credit: URL.i.URL?日本語SNLI(JSNLI)データセット * License: CC BY-SA 4.0 * Link: URL.i.URL?日本語SNLI(JSNLI)データセット
[ "# Sentence BERT base Japanese model\n\nThis repository contains a Sentence BERT base model for Japanese.", "## Pretrained model\n\nThis model utilizes a Japanese BERT model colorfulscoop/bert-base-ja v1.0 released under Creative Commons Attribution-ShareAlike 3.0 as a pretrained model.", "## Training data\n\nJapanese SNLI dataset released under Creative Commons Attribution-ShareAlike 4.0 is used for training.\n\nOriginal training dataset is splitted into train/valid dataset. Finally, follwoing data is prepared.\n\n* Train data: 523,005 samples\n* Valid data: 10,000 samples\n* Test data: 3,916 samples", "## Model description\n\nThis model utilizes 'SentenceTransformer' model from the sentence-transformers .\nThe model detail is as below.", "## Training\n\nThis model finetuned colorfulscoop/bert-base-ja with Softmax classifier of 3 labels of SNLI. AdamW optimizer with learning rate of 2e-05 linearly warmed-up in 10% of train data was used. The model was trained in 1 epoch with batch size 8.\n\nNote: in a original paper of Sentence BERT, a batch size of the model trained on SNLI and Multi-Genle NLI was 16. In this model, the dataset is around half smaller than the origial one, therefore the batch size was set to half of the original batch size of 16.\n\nTrainind was conducted on Ubuntu 18.04.5 LTS with one RTX 2080 Ti.\n\nAfter training, test set accuracy reached to 0.8529.\n\nTraining code is available in a GitHub repository.", "## Usage\n\nFirst, install dependecies.\n\n\n\nThen initialize 'SentenceTransformer' model and use 'encode' method to convert to vectors.", "## License\n\nCopyright (c) 2021 Colorful Scoop\n\nAll the models included in this repository are licensed under Creative Commons Attribution-ShareAlike 4.0.\n\nDisclaimer: Use of this model is at your sole risk. Colorful Scoop makes no warranty or guarantee of any outputs from the model. Colorful Scoop is not liable for any trouble, loss, or damage arising from the model output.\n\n---\n\nThis model utilizes the folllowing pretrained model.\n\n* Name: bert-base-ja\n* Credit: (c) 2021 Colorful Scoop\n* License: Creative Commons Attribution-ShareAlike 3.0\n* Disclaimer: The model potentially has possibility that it generates similar texts in the training data, texts not to be true, or biased texts. Use of the model is at your sole risk. Colorful Scoop makes no warranty or guarantee of any outputs from the model. Colorful Scoop is not liable for any trouble, loss, or damage arising from the model output.\n* Link: URL\n\n---\n\nThis model utilizes the following data for fine-tuning.\n\n* Name: 日本語SNLI(JSNLI)データセット\n* Credit: URL.i.URL?日本語SNLI(JSNLI)データセット\n* License: CC BY-SA 4.0\n* Link: URL.i.URL?日本語SNLI(JSNLI)データセット" ]
[ "TAGS\n#sentence-transformers #pytorch #bert #feature-extraction #sentence-similarity #ja #arxiv-1908.10084 #license-cc-by-sa-4.0 #endpoints_compatible #has_space #region-us \n", "# Sentence BERT base Japanese model\n\nThis repository contains a Sentence BERT base model for Japanese.", "## Pretrained model\n\nThis model utilizes a Japanese BERT model colorfulscoop/bert-base-ja v1.0 released under Creative Commons Attribution-ShareAlike 3.0 as a pretrained model.", "## Training data\n\nJapanese SNLI dataset released under Creative Commons Attribution-ShareAlike 4.0 is used for training.\n\nOriginal training dataset is splitted into train/valid dataset. Finally, follwoing data is prepared.\n\n* Train data: 523,005 samples\n* Valid data: 10,000 samples\n* Test data: 3,916 samples", "## Model description\n\nThis model utilizes 'SentenceTransformer' model from the sentence-transformers .\nThe model detail is as below.", "## Training\n\nThis model finetuned colorfulscoop/bert-base-ja with Softmax classifier of 3 labels of SNLI. AdamW optimizer with learning rate of 2e-05 linearly warmed-up in 10% of train data was used. The model was trained in 1 epoch with batch size 8.\n\nNote: in a original paper of Sentence BERT, a batch size of the model trained on SNLI and Multi-Genle NLI was 16. In this model, the dataset is around half smaller than the origial one, therefore the batch size was set to half of the original batch size of 16.\n\nTrainind was conducted on Ubuntu 18.04.5 LTS with one RTX 2080 Ti.\n\nAfter training, test set accuracy reached to 0.8529.\n\nTraining code is available in a GitHub repository.", "## Usage\n\nFirst, install dependecies.\n\n\n\nThen initialize 'SentenceTransformer' model and use 'encode' method to convert to vectors.", "## License\n\nCopyright (c) 2021 Colorful Scoop\n\nAll the models included in this repository are licensed under Creative Commons Attribution-ShareAlike 4.0.\n\nDisclaimer: Use of this model is at your sole risk. Colorful Scoop makes no warranty or guarantee of any outputs from the model. Colorful Scoop is not liable for any trouble, loss, or damage arising from the model output.\n\n---\n\nThis model utilizes the folllowing pretrained model.\n\n* Name: bert-base-ja\n* Credit: (c) 2021 Colorful Scoop\n* License: Creative Commons Attribution-ShareAlike 3.0\n* Disclaimer: The model potentially has possibility that it generates similar texts in the training data, texts not to be true, or biased texts. Use of the model is at your sole risk. Colorful Scoop makes no warranty or guarantee of any outputs from the model. Colorful Scoop is not liable for any trouble, loss, or damage arising from the model output.\n* Link: URL\n\n---\n\nThis model utilizes the following data for fine-tuning.\n\n* Name: 日本語SNLI(JSNLI)データセット\n* Credit: URL.i.URL?日本語SNLI(JSNLI)データセット\n* License: CC BY-SA 4.0\n* Link: URL.i.URL?日本語SNLI(JSNLI)データセット" ]
automatic-speech-recognition
transformers
# Czech wav2vec2-xls-r-300m-cs-250 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice 8.0 dataset as well as other datasets listed below. It achieves the following results on the evaluation set: - Loss: 0.1271 - Wer: 0.1475 - Cer: 0.0329 The `eval.py` script results using an LM are: - WER: 0.07274312090176113 - CER: 0.021207369275558875 ## Model description Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on Czech using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz. The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("mozilla-foundation/common_voice_8_0", "cs", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("comodoro/wav2vec2-xls-r-300m-cs-250") model = Wav2Vec2ForCTC.from_pretrained("comodoro/wav2vec2-xls-r-300m-cs-250") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset[:2]["sentence"]) ``` ## Evaluation The model can be evaluated using the attached `eval.py` script: ``` python eval.py --model_id comodoro/wav2vec2-xls-r-300m-cs-250 --dataset mozilla-foundation/common_voice_8_0 --split test --config cs ``` ## Training and evaluation data The Common Voice 8.0 `train` and `validation` datasets were used for training, as well as the following datasets: - Šmídl, Luboš and Pražák, Aleš, 2013, OVM – Otázky Václava Moravce, LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (ÚFAL), Faculty of Mathematics and Physics, Charles University, http://hdl.handle.net/11858/00-097C-0000-000D-EC98-3. - Pražák, Aleš and Šmídl, Luboš, 2012, Czech Parliament Meetings, LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (ÚFAL), Faculty of Mathematics and Physics, Charles University, http://hdl.handle.net/11858/00-097C-0000-0005-CF9C-4. - Plátek, Ondřej; Dušek, Ondřej and Jurčíček, Filip, 2016, Vystadial 2016 – Czech data, LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (ÚFAL), Faculty of Mathematics and Physics, Charles University, http://hdl.handle.net/11234/1-1740. 
### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 800 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:| | 3.4203 | 0.16 | 800 | 3.3148 | 1.0 | 1.0 | | 2.8151 | 0.32 | 1600 | 0.8508 | 0.8938 | 0.2345 | | 0.9411 | 0.48 | 2400 | 0.3335 | 0.3723 | 0.0847 | | 0.7408 | 0.64 | 3200 | 0.2573 | 0.2840 | 0.0642 | | 0.6516 | 0.8 | 4000 | 0.2365 | 0.2581 | 0.0595 | | 0.6242 | 0.96 | 4800 | 0.2039 | 0.2433 | 0.0541 | | 0.5754 | 1.12 | 5600 | 0.1832 | 0.2156 | 0.0482 | | 0.5626 | 1.28 | 6400 | 0.1827 | 0.2091 | 0.0463 | | 0.5342 | 1.44 | 7200 | 0.1744 | 0.2033 | 0.0468 | | 0.4965 | 1.6 | 8000 | 0.1705 | 0.1963 | 0.0444 | | 0.5047 | 1.76 | 8800 | 0.1604 | 0.1889 | 0.0422 | | 0.4814 | 1.92 | 9600 | 0.1604 | 0.1827 | 0.0411 | | 0.4471 | 2.09 | 10400 | 0.1566 | 0.1822 | 0.0406 | | 0.4509 | 2.25 | 11200 | 0.1619 | 0.1853 | 0.0432 | | 0.4415 | 2.41 | 12000 | 0.1513 | 0.1764 | 0.0397 | | 0.4313 | 2.57 | 12800 | 0.1515 | 0.1739 | 0.0392 | | 0.4163 | 2.73 | 13600 | 0.1445 | 0.1695 | 0.0377 | | 0.4142 | 2.89 | 14400 | 0.1478 | 0.1699 | 0.0385 | | 0.4184 | 3.05 | 15200 | 0.1430 | 0.1669 | 0.0376 | | 0.3886 | 3.21 | 16000 | 0.1433 | 0.1644 | 0.0374 | | 0.3795 | 3.37 | 16800 | 0.1426 | 0.1648 | 0.0373 | | 0.3859 | 3.53 | 17600 | 0.1357 | 0.1604 | 0.0361 | | 0.3762 | 3.69 | 18400 | 0.1344 | 0.1558 | 0.0349 | | 0.384 | 3.85 | 19200 | 0.1379 | 0.1576 | 0.0359 | | 0.3762 | 4.01 | 20000 | 0.1344 | 0.1539 | 0.0346 | | 0.3559 | 4.17 | 20800 | 0.1339 | 0.1525 | 0.0351 | | 0.3683 | 4.33 | 21600 | 0.1315 | 0.1518 | 0.0342 | | 0.3572 | 4.49 | 22400 | 0.1307 | 0.1507 | 0.0342 | | 0.3494 | 4.65 | 23200 | 0.1294 | 0.1491 | 0.0335 | | 0.3476 | 4.81 | 24000 | 0.1287 | 0.1491 | 0.0336 | | 0.3475 | 4.97 | 24800 | 0.1271 | 0.1475 | 0.0329 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.1+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0
{"language": ["cs"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "xlsr-fine-tuning-week"], "datasets": ["mozilla-foundation/common_voice_8_0", "ovm", "pscr", "vystadial2016"], "base_model": "facebook/wav2vec2-xls-r-300m", "model-index": [{"name": "Czech comodoro Wav2Vec2 XLSR 300M 250h data", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "cs"}, "metrics": [{"type": "wer", "value": 7.3, "name": "Test WER"}, {"type": "cer", "value": 2.1, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "cs"}, "metrics": [{"type": "wer", "value": 43.44, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "cs"}, "metrics": [{"type": "wer", "value": 38.5, "name": "Test WER"}]}]}]}
comodoro/wav2vec2-xls-r-300m-cs-250
null
[ "transformers", "pytorch", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "xlsr-fine-tuning-week", "cs", "dataset:mozilla-foundation/common_voice_8_0", "dataset:ovm", "dataset:pscr", "dataset:vystadial2016", "base_model:facebook/wav2vec2-xls-r-300m", "license:apache-2.0", "model-index", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "cs" ]
TAGS #transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #robust-speech-event #xlsr-fine-tuning-week #cs #dataset-mozilla-foundation/common_voice_8_0 #dataset-ovm #dataset-pscr #dataset-vystadial2016 #base_model-facebook/wav2vec2-xls-r-300m #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
Czech wav2vec2-xls-r-300m-cs-250 ================================ This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common\_voice 8.0 dataset as well as other datasets listed below. It achieves the following results on the evaluation set: * Loss: 0.1271 * Wer: 0.1475 * Cer: 0.0329 The 'URL' script results using a LM are: * WER: 0.07274312090176113 * CER: 0.021207369275558875 Model description ----------------- Fine-tuned facebook/wav2vec2-large-xlsr-53 on Czech using the Common Voice dataset. When using this model, make sure that your speech input is sampled at 16kHz. The model can be used directly (without a language model) as follows: Evaluation ---------- The model can be evaluated using the attached 'URL' script: Training and evaluation data ---------------------------- The Common Voice 8.0 'train' and 'validation' datasets were used for training, as well as the following datasets: * Šmídl, Luboš and Pražák, Aleš, 2013, OVM – Otázky Václava Moravce, LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (ÚFAL), Faculty of Mathematics and Physics, Charles University, URL * Pražák, Aleš and Šmídl, Luboš, 2012, Czech Parliament Meetings, LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (ÚFAL), Faculty of Mathematics and Physics, Charles University, URL * Plátek, Ondřej; Dušek, Ondřej and Jurčíček, Filip, 2016, Vystadial 2016 – Czech data, LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (ÚFAL), Faculty of Mathematics and Physics, Charles University, URL ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0001 * train\_batch\_size: 32 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 800 * num\_epochs: 5 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.16.2 * Pytorch 1.10.1+cu102 * Datasets 1.18.3 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 800\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.1+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #robust-speech-event #xlsr-fine-tuning-week #cs #dataset-mozilla-foundation/common_voice_8_0 #dataset-ovm #dataset-pscr #dataset-vystadial2016 #base_model-facebook/wav2vec2-xls-r-300m #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 800\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.1+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xls-r-300m-cs-cv8 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice 8.0 dataset. It achieves the following results on the evaluation set while training: - Loss: 0.2327 - Wer: 0.1608 - Cer: 0.0376 The `eval.py` script results using an LM are: WER: 0.10281503199350225 CER: 0.02622802241689026 ## Model description Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on Czech using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz. The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("mozilla-foundation/common_voice_8_0", "cs", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("comodoro/wav2vec2-xls-r-300m-cs-cv8") model = Wav2Vec2ForCTC.from_pretrained("comodoro/wav2vec2-xls-r-300m-cs-cv8") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset[:2]["sentence"]) ``` ## Evaluation The model can be evaluated using the attached `eval.py` script: ``` python eval.py --model_id comodoro/wav2vec2-xls-r-300m-cs-cv8 --dataset mozilla-foundation/common_voice_8_0 --split test --config cs ``` ## Training and evaluation data The Common Voice 8.0 `train` and `validation` datasets were used for training. ## Training procedure ### Training hyperparameters The following hyperparameters were used during the first stage of training: - learning_rate: 7e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 20 - total_train_batch_size: 640 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 150 - mixed_precision_training: Native AMP The following hyperparameters were used during the second stage of training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 20 - total_train_batch_size: 640 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:------:|:----:|:---------------:|:------:|:------:| | 7.2926 | 8.06 | 250 | 3.8497 | 1.0 | 1.0 | | 3.417 | 16.13 | 500 | 3.2852 | 1.0 | 0.9857 | | 2.0264 | 24.19 | 750 | 0.7099 | 0.7342 | 0.1768 | | 0.4018 | 32.25 | 1000 | 0.6188 | 
0.6415 | 0.1551 | | 0.2444 | 40.32 | 1250 | 0.6632 | 0.6362 | 0.1600 | | 0.1882 | 48.38 | 1500 | 0.6070 | 0.5783 | 0.1388 | | 0.153 | 56.44 | 1750 | 0.6425 | 0.5720 | 0.1377 | | 0.1214 | 64.51 | 2000 | 0.6363 | 0.5546 | 0.1337 | | 0.1011 | 72.57 | 2250 | 0.6310 | 0.5222 | 0.1224 | | 0.0879 | 80.63 | 2500 | 0.6353 | 0.5258 | 0.1253 | | 0.0782 | 88.7 | 2750 | 0.6078 | 0.4904 | 0.1127 | | 0.0709 | 96.76 | 3000 | 0.6465 | 0.4960 | 0.1154 | | 0.0661 | 104.82 | 3250 | 0.6622 | 0.4945 | 0.1166 | | 0.0616 | 112.89 | 3500 | 0.6440 | 0.4786 | 0.1104 | | 0.0579 | 120.95 | 3750 | 0.6815 | 0.4887 | 0.1144 | | 0.0549 | 129.03 | 4000 | 0.6603 | 0.4780 | 0.1105 | | 0.0527 | 137.09 | 4250 | 0.6652 | 0.4749 | 0.1090 | | 0.0506 | 145.16 | 4500 | 0.6958 | 0.4846 | 0.1133 | Further fine-tuning with slightly different architecture and higher learning rate: | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:| | 0.576 | 8.06 | 250 | 0.2411 | 0.2340 | 0.0502 | | 0.2564 | 16.13 | 500 | 0.2305 | 0.2097 | 0.0492 | | 0.2018 | 24.19 | 750 | 0.2371 | 0.2059 | 0.0494 | | 0.1549 | 32.25 | 1000 | 0.2298 | 0.1844 | 0.0435 | | 0.1224 | 40.32 | 1250 | 0.2288 | 0.1725 | 0.0407 | | 0.1004 | 48.38 | 1500 | 0.2327 | 0.1608 | 0.0376 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
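The WER/CER figures above are fractions of word and character errors measured against the test references. As an illustrative sketch only (the card's own `eval.py` script is authoritative), metrics of this kind can be computed with the `jiwer` package from reference/hypothesis pairs; the strings below are placeholders, not Common Voice data:

```python
# Illustrative only: word and character error rates with jiwer (pip install jiwer).
# The reference/hypothesis strings are placeholders, not Common Voice sentences.
from jiwer import wer, cer

references = ["dobrý den jak se máte", "dnes je hezky"]
hypotheses = ["dobrý den jak se máte", "dnes je pěkně"]

print("WER:", wer(references, hypotheses))  # fraction of word-level errors
print("CER:", cer(references, hypotheses))  # fraction of character-level errors
```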
{"language": ["cs"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "robust-speech-event", "xlsr-fine-tuning-week", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "Czech comodoro Wav2Vec2 XLSR 300M CV8", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "cs"}, "metrics": [{"type": "wer", "value": 10.3, "name": "Test WER"}, {"type": "cer", "value": 2.6, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "cs"}, "metrics": [{"type": "wer", "value": 54.29, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "cs"}, "metrics": [{"type": "wer", "value": 44.55, "name": "Test WER"}]}]}]}
comodoro/wav2vec2-xls-r-300m-cs-cv8
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "robust-speech-event", "xlsr-fine-tuning-week", "hf-asr-leaderboard", "cs", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "cs" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #robust-speech-event #xlsr-fine-tuning-week #hf-asr-leaderboard #cs #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
wav2vec2-xls-r-300m-cs-cv8 ========================== This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common\_voice 8.0 dataset. It achieves the following results on the evaluation set while training: * Loss: 0.2327 * Wer: 0.1608 * Cer: 0.0376 The 'URL' script results using a LM are: WER: 0.10281503199350225 CER: 0.02622802241689026 Model description ----------------- Fine-tuned facebook/wav2vec2-large-xlsr-53 on Czech using the Common Voice dataset. When using this model, make sure that your speech input is sampled at 16kHz. The model can be used directly (without a language model) as follows: Evaluation ---------- The model can be evaluated using the attached 'URL' script: Training and evaluation data ---------------------------- The Common Voice 8.0 'train' and 'validation' datasets were used for training Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during first stage of training: * learning\_rate: 7e-05 * train\_batch\_size: 32 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 20 * total\_train\_batch\_size: 640 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * num\_epochs: 150 * mixed\_precision\_training: Native AMP The following hyperparameters were used during second stage of training: * learning\_rate: 0.001 * train\_batch\_size: 32 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 20 * total\_train\_batch\_size: 640 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * num\_epochs: 50 * mixed\_precision\_training: Native AMP ### Training results Further fine-tuning with slightly different architecture and higher learning rate: ### Framework versions * Transformers 4.16.0.dev0 * Pytorch 1.10.1+cu102 * Datasets 1.17.1.dev0 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during first stage of training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 20\n* total\\_train\\_batch\\_size: 640\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 150\n* mixed\\_precision\\_training: Native AMP\n\n\nThe following hyperparameters were used during second stage of training:\n\n\n* learning\\_rate: 0.001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 20\n* total\\_train\\_batch\\_size: 640\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP", "### Training results\n\n\n\nFurther fine-tuning with slightly different architecture and higher learning rate:", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #robust-speech-event #xlsr-fine-tuning-week #hf-asr-leaderboard #cs #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during first stage of training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 20\n* total\\_train\\_batch\\_size: 640\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 150\n* mixed\\_precision\\_training: Native AMP\n\n\nThe following hyperparameters were used during second stage of training:\n\n\n* learning\\_rate: 0.001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 20\n* total\\_train\\_batch\\_size: 640\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP", "### Training results\n\n\n\nFurther fine-tuning with slightly different architecture and higher learning rate:", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0" ]
automatic-speech-recognition
transformers
# Wav2Vec2-Large-XLSR-53-Czech Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Czech using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "cs", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("comodoro/wav2vec2-xls-r-300m-cs") model = Wav2Vec2ForCTC.from_pretrained("comodoro/wav2vec2-xls-r-300m-cs") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset[:2]["sentence"]) ``` ## Evaluation The model can be evaluated as follows on the Czech test data of Common Voice 6.1 ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "cs", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("comodoro/wav2vec2-xls-r-300m-cs") model = Wav2Vec2ForCTC.from_pretrained("comodoro/wav2vec2-xls-r-300m-cs") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\/\"\“\„\%\”\�\–\'\`\«\»\—\’\…]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. # We need to read the aduio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 22.20 % ## Training The Common Voice `train` and `validation` datasets were used for training # TODO The script used for training can be found [here](...)
{"language": ["cs"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "common_voice", "generated_from_trainer", "hf-asr-leaderboard", "robust-speech-event", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "model-index": [{"name": "Czech comodoro Wav2Vec2 XLSR 300M CV6.1", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 6.1", "type": "common_voice", "args": "cs"}, "metrics": [{"type": "wer", "value": 22.2, "name": "Test WER"}, {"type": "cer", "value": 5.1, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "cs"}, "metrics": [{"type": "wer", "value": 66.78, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "cs"}, "metrics": [{"type": "wer", "value": 57.52, "name": "Test WER"}]}]}]}
comodoro/wav2vec2-xls-r-300m-cs
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "common_voice", "generated_from_trainer", "hf-asr-leaderboard", "robust-speech-event", "xlsr-fine-tuning-week", "cs", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "cs" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #common_voice #generated_from_trainer #hf-asr-leaderboard #robust-speech-event #xlsr-fine-tuning-week #cs #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
# Wav2Vec2-Large-XLSR-53-Czech Fine-tuned facebook/wav2vec2-large-xlsr-53 on Czech using the Common Voice dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ## Evaluation The model can be evaluated as follows on the Czech test data of Common Voice 6.1 Test Result: 22.20 % ## Training The Common Voice 'train' and 'validation' datasets were used for training # TODO The script used for training can be found here
[ "# Wav2Vec2-Large-XLSR-53-Czech\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Czech using the Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the Czech test data of Common Voice 6.1 \n\n\n\n\nTest Result: 22.20 %", "## Training\n\nThe Common Voice 'train' and 'validation' datasets were used for training", "# TODO The script used for training can be found here" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #common_voice #generated_from_trainer #hf-asr-leaderboard #robust-speech-event #xlsr-fine-tuning-week #cs #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "# Wav2Vec2-Large-XLSR-53-Czech\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Czech using the Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the Czech test data of Common Voice 6.1 \n\n\n\n\nTest Result: 22.20 %", "## Training\n\nThe Common Voice 'train' and 'validation' datasets were used for training", "# TODO The script used for training can be found here" ]
automatic-speech-recognition
transformers
# Upper Sorbian wav2vec2-xls-r-300m-hsb-cv8 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.9643 - Wer: 0.5037 - Cer: 0.1278 ## Evaluation The model can be evaluated using the attached `eval.py` script: ``` python eval.py --model_id comodoro/wav2vec2-xls-r-300m-hsb-cv8 --dataset mozilla-foundation/common-voice_8_0 --split test --config hsb ``` ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 200 - num_epochs: 500 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:------:|:-----:|:---------------:|:------:|:------:| | 4.3121 | 19.35 | 1200 | 3.2059 | 1.0 | 1.0 | | 2.6525 | 38.71 | 2400 | 1.1324 | 0.9387 | 0.3204 | | 1.3644 | 58.06 | 3600 | 0.8767 | 0.8099 | 0.2271 | | 1.093 | 77.42 | 4800 | 0.8739 | 0.7603 | 0.2090 | | 0.9546 | 96.77 | 6000 | 0.8454 | 0.6983 | 0.1882 | | 0.8554 | 116.13 | 7200 | 0.8197 | 0.6484 | 0.1708 | | 0.775 | 135.48 | 8400 | 0.8452 | 0.6345 | 0.1681 | | 0.7167 | 154.84 | 9600 | 0.8551 | 0.6241 | 0.1631 | | 0.6609 | 174.19 | 10800 | 0.8442 | 0.5821 | 0.1531 | | 0.616 | 193.55 | 12000 | 0.8892 | 0.5864 | 0.1527 | | 0.5815 | 212.9 | 13200 | 0.8839 | 0.5772 | 0.1503 | | 0.55 | 232.26 | 14400 | 0.8905 | 0.5665 | 0.1436 | | 0.5173 | 251.61 | 15600 | 0.8995 | 0.5471 | 0.1417 | | 0.4969 | 270.97 | 16800 | 0.8633 | 0.5325 | 0.1334 | | 0.4803 | 290.32 | 18000 | 0.9074 | 0.5253 | 0.1352 | | 0.4596 | 309.68 | 19200 | 0.9159 | 0.5146 | 0.1294 | | 0.4415 | 329.03 | 20400 | 0.9055 | 0.5189 | 0.1314 | | 0.434 | 348.39 | 21600 | 0.9435 | 0.5208 | 0.1314 | | 0.4199 | 367.74 | 22800 | 0.9199 | 0.5136 | 0.1290 | | 0.4008 | 387.1 | 24000 | 0.9342 | 0.5174 | 0.1303 | | 0.4051 | 406.45 | 25200 | 0.9436 | 0.5132 | 0.1292 | | 0.3861 | 425.81 | 26400 | 0.9417 | 0.5084 | 0.1283 | | 0.3738 | 445.16 | 27600 | 0.9573 | 0.5079 | 0.1299 | | 0.3768 | 464.52 | 28800 | 0.9682 | 0.5062 | 0.1289 | | 0.3647 | 483.87 | 30000 | 0.9643 | 0.5037 | 0.1278 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0
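The card above reports metrics and hyperparameters but no inference snippet. A minimal usage sketch via the `transformers` ASR pipeline, assuming a local recording at the placeholder path `sample_hsb.wav` (ffmpeg must be available so the pipeline can decode the file):

```python
# Minimal inference sketch; "sample_hsb.wav" is a placeholder audio file.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition",
               model="comodoro/wav2vec2-xls-r-300m-hsb-cv8")
print(asr("sample_hsb.wav")["text"])
```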
{"language": ["hsb"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "robust-speech-event", "xlsr-fine-tuning-week", "hf-asr-leaderboard"], "datasets": ["common_voice"], "model-index": [{"name": "Upper Sorbian comodoro Wav2Vec2 XLSR 300M CV8", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "hsb"}, "metrics": [{"type": "wer", "value": 56.3, "name": "Test WER"}, {"type": "cer", "value": 14.3, "name": "Test CER"}]}]}]}
comodoro/wav2vec2-xls-r-300m-hsb-cv8
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "robust-speech-event", "xlsr-fine-tuning-week", "hf-asr-leaderboard", "hsb", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "hsb" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #robust-speech-event #xlsr-fine-tuning-week #hf-asr-leaderboard #hsb #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
Upper Sorbian wav2vec2-xls-r-300m-hsb-cv8 ========================================= This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common\_voice dataset. It achieves the following results on the evaluation set: * Loss: 0.9643 * Wer: 0.5037 * Cer: 0.1278 Evaluation ---------- The model can be evaluated using the attached 'URL' script: ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0001 * train\_batch\_size: 16 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 200 * num\_epochs: 500 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.16.0.dev0 * Pytorch 1.10.1+cu102 * Datasets 1.18.3 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 200\n* num\\_epochs: 500\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #robust-speech-event #xlsr-fine-tuning-week #hf-asr-leaderboard #hsb #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 200\n* num\\_epochs: 500\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
automatic-speech-recognition
transformers
# wav2vec2-xls-r-300m-pl-cv8 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice 8.0 dataset. It achieves the following results on the evaluation set while training: - Loss: 0.1716 - Wer: 0.1697 - Cer: 0.0385 The `eval.py` script results are: WER: 0.16970531733661967 CER: 0.03839135416519316 ## Model description Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Polish using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz. The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("mozilla-foundation/common_voice_8_0", "pl", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("comodoro/wav2vec2-xls-r-300m-pl-cv8") model = Wav2Vec2ForCTC.from_pretrained("comodoro/wav2vec2-xls-r-300m-pl-cv8") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset[:2]["sentence"]) ``` ## Evaluation The model can be evaluated using the attached `eval.py` script: ``` python eval.py --model_id comodoro/wav2vec2-xls-r-300m-pl-cv8 --dataset mozilla-foundation/common-voice_8_0 --split test --config pl ``` ## Training and evaluation data The Common Voice 8.0 `train` and `validation` datasets were used for training ## Training procedure ### Training hyperparameters The following hyperparameters were used: - learning_rate: 1e-4 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 1 - total_train_batch_size: 640 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 150 - mixed_precision_training: Native AMP The training was interrupted after 3250 steps. ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
{"language": ["pl"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "xlsr-fine-tuning-week", "hf-asr-leaderboard"], "datasets": ["common_voice"], "model-index": [{"name": "Polish comodoro Wav2Vec2 XLSR 300M CV8", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "pl"}, "metrics": [{"type": "wer", "value": 17.0, "name": "Test WER"}, {"type": "cer", "value": 3.8, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "pl"}, "metrics": [{"type": "wer", "value": 38.97, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "pl"}, "metrics": [{"type": "wer", "value": 46.05, "name": "Test WER"}]}]}]}
comodoro/wav2vec2-xls-r-300m-pl-cv8
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "xlsr-fine-tuning-week", "hf-asr-leaderboard", "pl", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "pl" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #robust-speech-event #xlsr-fine-tuning-week #hf-asr-leaderboard #pl #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
# wav2vec2-xls-r-300m-pl-cv8 This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common_voice 8.0 dataset. It achieves the following results on the evaluation set while training: - Loss: 0.1716 - Wer: 0.1697 - Cer: 0.0385 The 'URL' script results are: WER: 0.16970531733661967 CER: 0.03839135416519316 ## Model description Fine-tuned facebook/wav2vec2-large-xlsr-53 on Polish using the Common Voice dataset. When using this model, make sure that your speech input is sampled at 16kHz. The model can be used directly (without a language model) as follows: ## Evaluation The model can be evaluated using the attached 'URL' script: ## Training and evaluation data The Common Voice 8.0 'train' and 'validation' datasets were used for training ## Training procedure ### Training hyperparameters The following hyperparameters were used: - learning_rate: 1e-4 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 1 - total_train_batch_size: 640 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 150 - mixed_precision_training: Native AMP The training was interrupted after 3250 steps. ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
[ "# wav2vec2-xls-r-300m-pl-cv8\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common_voice 8.0 dataset.\nIt achieves the following results on the evaluation set while training:\n- Loss: 0.1716\n- Wer: 0.1697\n- Cer: 0.0385\n\nThe 'URL' script results are:\nWER: 0.16970531733661967\nCER: 0.03839135416519316", "## Model description\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Polish using the Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.\n\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated using the attached 'URL' script:", "## Training and evaluation data\n\nThe Common Voice 8.0 'train' and 'validation' datasets were used for training", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used:\n\n- learning_rate: 1e-4\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 1\n- total_train_batch_size: 640\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 150\n- mixed_precision_training: Native AMP\n\nThe training was interrupted after 3250 steps.", "### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.1+cu102\n- Datasets 1.17.1.dev0\n- Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #robust-speech-event #xlsr-fine-tuning-week #hf-asr-leaderboard #pl #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "# wav2vec2-xls-r-300m-pl-cv8\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common_voice 8.0 dataset.\nIt achieves the following results on the evaluation set while training:\n- Loss: 0.1716\n- Wer: 0.1697\n- Cer: 0.0385\n\nThe 'URL' script results are:\nWER: 0.16970531733661967\nCER: 0.03839135416519316", "## Model description\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Polish using the Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.\n\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated using the attached 'URL' script:", "## Training and evaluation data\n\nThe Common Voice 8.0 'train' and 'validation' datasets were used for training", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used:\n\n- learning_rate: 1e-4\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 1\n- total_train_batch_size: 640\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 150\n- mixed_precision_training: Native AMP\n\nThe training was interrupted after 3250 steps.", "### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.1+cu102\n- Datasets 1.17.1.dev0\n- Tokenizers 0.11.0" ]
automatic-speech-recognition
transformers
# wav2vec2-xls-r-300m-cs-cv8 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice 8.0 dataset. It achieves the following results on the evaluation set: - WER: 0.49575384615384616 - CER: 0.13333333333333333 ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("mozilla-foundation/common_voice_8_0", "sk", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("comodoro/wav2vec2-xls-r-300m-sk-cv8") model = Wav2Vec2ForCTC.from_pretrained("comodoro/wav2vec2-xls-r-300m-sk-cv8") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset[:2]["sentence"]) ``` ## Evaluation The model can be evaluated using the attached `eval.py` script: ``` python eval.py --model_id comodoro/wav2vec2-xls-r-300m-sk-cv8 --dataset mozilla-foundation/common_voice_8_0 --split test --config sk ``` ## Training and evaluation data The Common Voice 8.0 `train` and `validation` datasets were used for training ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-4 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 20 - total_train_batch_size: 640 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 50 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
{"language": ["sk"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "xlsr-fine-tuning-week", "hf-asr-leaderboard"], "datasets": ["common_voice"], "model-index": [{"name": "Slovak comodoro Wav2Vec2 XLSR 300M CV8", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "sk"}, "metrics": [{"type": "wer", "value": 49.6, "name": "Test WER"}, {"type": "cer", "value": 13.3, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "sk"}, "metrics": [{"type": "wer", "value": 81.7, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "sk"}, "metrics": [{"type": "wer", "value": 80.26, "name": "Test WER"}]}]}]}
comodoro/wav2vec2-xls-r-300m-sk-cv8
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "xlsr-fine-tuning-week", "hf-asr-leaderboard", "sk", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "sk" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #robust-speech-event #xlsr-fine-tuning-week #hf-asr-leaderboard #sk #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
# wav2vec2-xls-r-300m-cs-cv8 This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common_voice 8.0 dataset. It achieves the following results on the evaluation set: - WER: 0.49575384615384616 - CER: 0.13333333333333333 ## Usage The model can be used directly (without a language model) as follows: ## Evaluation The model can be evaluated using the attached 'URL' script: ## Training and evaluation data The Common Voice 8.0 'train' and 'validation' datasets were used for training ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-4 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 20 - total_train_batch_size: 640 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 50 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
[ "# wav2vec2-xls-r-300m-cs-cv8\r\n\r\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common_voice 8.0 dataset.\r\nIt achieves the following results on the evaluation set:\r\n\r\n- WER: 0.49575384615384616\r\n- CER: 0.13333333333333333", "## Usage\r\n\r\nThe model can be used directly (without a language model) as follows:", "## Evaluation\r\n\r\nThe model can be evaluated using the attached 'URL' script:", "## Training and evaluation data\r\n\r\nThe Common Voice 8.0 'train' and 'validation' datasets were used for training", "### Training hyperparameters\r\n\r\nThe following hyperparameters were used during training:\r\n\r\n- learning_rate: 7e-4\r\n- train_batch_size: 32\r\n- eval_batch_size: 8\r\n- seed: 42\r\n- gradient_accumulation_steps: 20\r\n- total_train_batch_size: 640\r\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\r\n- lr_scheduler_type: linear\r\n- lr_scheduler_warmup_steps: 500\r\n- num_epochs: 50\r\n- mixed_precision_training: Native AMP", "### Framework versions\r\n\r\n- Transformers 4.16.0.dev0\r\n- Pytorch 1.10.1+cu102\r\n- Datasets 1.17.1.dev0\r\n- Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #robust-speech-event #xlsr-fine-tuning-week #hf-asr-leaderboard #sk #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "# wav2vec2-xls-r-300m-cs-cv8\r\n\r\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common_voice 8.0 dataset.\r\nIt achieves the following results on the evaluation set:\r\n\r\n- WER: 0.49575384615384616\r\n- CER: 0.13333333333333333", "## Usage\r\n\r\nThe model can be used directly (without a language model) as follows:", "## Evaluation\r\n\r\nThe model can be evaluated using the attached 'URL' script:", "## Training and evaluation data\r\n\r\nThe Common Voice 8.0 'train' and 'validation' datasets were used for training", "### Training hyperparameters\r\n\r\nThe following hyperparameters were used during training:\r\n\r\n- learning_rate: 7e-4\r\n- train_batch_size: 32\r\n- eval_batch_size: 8\r\n- seed: 42\r\n- gradient_accumulation_steps: 20\r\n- total_train_batch_size: 640\r\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\r\n- lr_scheduler_type: linear\r\n- lr_scheduler_warmup_steps: 500\r\n- num_epochs: 50\r\n- mixed_precision_training: Native AMP", "### Framework versions\r\n\r\n- Transformers 4.16.0.dev0\r\n- Pytorch 1.10.1+cu102\r\n- Datasets 1.17.1.dev0\r\n- Tokenizers 0.11.0" ]
automatic-speech-recognition
transformers
# Serbian wav2vec2-xls-r-300m-sr-cv8 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 1.7302 - Wer: 0.4825 - Cer: 0.1847 Evaluation on mozilla-foundation/common_voice_8_0 gave the following results: - WER: 0.48530097993467103 - CER: 0.18413288165227845 Evaluation on speech-recognition-community-v2/dev_data gave the following results: - WER: 0.9718373107518604 - CER: 0.8302740620263108 The model can be evaluated using the attached `eval.py` script: ``` python eval.py --model_id comodoro/wav2vec2-xls-r-300m-sr-cv8 --dataset mozilla-foundation/common-voice_8_0 --split test --config sr ``` ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 300 - num_epochs: 800 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:| | 5.6536 | 15.0 | 1200 | 2.9744 | 1.0 | 1.0 | | 2.7935 | 30.0 | 2400 | 1.6613 | 0.8998 | 0.4670 | | 1.6538 | 45.0 | 3600 | 0.9248 | 0.6918 | 0.2699 | | 1.2446 | 60.0 | 4800 | 0.9151 | 0.6452 | 0.2398 | | 1.0766 | 75.0 | 6000 | 0.9110 | 0.5995 | 0.2207 | | 0.9548 | 90.0 | 7200 | 1.0273 | 0.5921 | 0.2149 | | 0.8919 | 105.0 | 8400 | 0.9929 | 0.5646 | 0.2117 | | 0.8185 | 120.0 | 9600 | 1.0850 | 0.5483 | 0.2069 | | 0.7692 | 135.0 | 10800 | 1.1001 | 0.5394 | 0.2055 | | 0.7249 | 150.0 | 12000 | 1.1018 | 0.5380 | 0.1958 | | 0.6786 | 165.0 | 13200 | 1.1344 | 0.5114 | 0.1941 | | 0.6432 | 180.0 | 14400 | 1.1516 | 0.5054 | 0.1905 | | 0.6009 | 195.0 | 15600 | 1.3149 | 0.5324 | 0.1991 | | 0.5773 | 210.0 | 16800 | 1.2468 | 0.5124 | 0.1903 | | 0.559 | 225.0 | 18000 | 1.2186 | 0.4956 | 0.1922 | | 0.5298 | 240.0 | 19200 | 1.4483 | 0.5333 | 0.2085 | | 0.5136 | 255.0 | 20400 | 1.2871 | 0.4802 | 0.1846 | | 0.4824 | 270.0 | 21600 | 1.2891 | 0.4974 | 0.1885 | | 0.4669 | 285.0 | 22800 | 1.3283 | 0.4942 | 0.1878 | | 0.4511 | 300.0 | 24000 | 1.4502 | 0.5002 | 0.1994 | | 0.4337 | 315.0 | 25200 | 1.4714 | 0.5035 | 0.1911 | | 0.4221 | 330.0 | 26400 | 1.4971 | 0.5124 | 0.1962 | | 0.3994 | 345.0 | 27600 | 1.4473 | 0.5007 | 0.1920 | | 0.3892 | 360.0 | 28800 | 1.3904 | 0.4937 | 0.1887 | | 0.373 | 375.0 | 30000 | 1.4971 | 0.4946 | 0.1902 | | 0.3657 | 390.0 | 31200 | 1.4208 | 0.4900 | 0.1821 | | 0.3559 | 405.0 | 32400 | 1.4648 | 0.4895 | 0.1835 | | 0.3476 | 420.0 | 33600 | 1.4848 | 0.4946 | 0.1829 | | 0.3276 | 435.0 | 34800 | 1.5597 | 0.4979 | 0.1873 | | 0.3193 | 450.0 | 36000 | 1.7329 | 0.5040 | 0.1980 | | 0.3078 | 465.0 | 37200 | 1.6379 | 0.4937 | 0.1882 | | 0.3058 | 480.0 | 38400 | 1.5878 | 0.4942 | 0.1921 | | 0.2987 | 495.0 | 39600 | 1.5590 | 0.4811 | 0.1846 | | 0.2931 | 510.0 | 40800 | 1.6001 | 0.4825 | 0.1849 | | 0.276 | 525.0 | 42000 | 1.7388 | 0.4942 | 0.1918 | | 0.2702 | 540.0 | 43200 | 1.7037 | 0.4839 | 0.1866 | | 0.2619 | 555.0 | 44400 | 1.6704 | 0.4755 | 0.1840 | | 0.262 | 570.0 | 45600 | 1.6042 | 0.4751 | 0.1865 | | 0.2528 | 585.0 | 46800 | 1.6402 | 0.4821 | 0.1865 | | 0.2442 | 600.0 | 48000 | 1.6693 | 0.4886 | 0.1862 | | 0.244 | 615.0 | 49200 | 1.6203 | 0.4765 | 0.1792 | | 0.2388 | 630.0 | 50400 | 1.6829 | 0.4830 | 0.1828 | | 0.2362 | 645.0 | 51600 | 1.8100 | 0.4928 | 0.1888 | | 
0.2224 | 660.0 | 52800 | 1.7746 | 0.4932 | 0.1899 | | 0.2218 | 675.0 | 54000 | 1.7752 | 0.4946 | 0.1901 | | 0.2201 | 690.0 | 55200 | 1.6775 | 0.4788 | 0.1844 | | 0.2147 | 705.0 | 56400 | 1.7085 | 0.4844 | 0.1851 | | 0.2103 | 720.0 | 57600 | 1.7624 | 0.4848 | 0.1864 | | 0.2101 | 735.0 | 58800 | 1.7213 | 0.4783 | 0.1835 | | 0.1983 | 750.0 | 60000 | 1.7452 | 0.4848 | 0.1856 | | 0.2015 | 765.0 | 61200 | 1.7525 | 0.4872 | 0.1869 | | 0.1969 | 780.0 | 62400 | 1.7443 | 0.4844 | 0.1852 | | 0.2043 | 795.0 | 63600 | 1.7302 | 0.4825 | 0.1847 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.1+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0
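Since this card does not include an inference example, the following is a minimal greedy-decoding sketch in the style of the other cards in this collection; `example.wav` is a placeholder mono recording that is resampled to the expected 16 kHz:

```python
# Minimal greedy-decoding sketch; "example.wav" is a placeholder mono recording.
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("comodoro/wav2vec2-xls-r-300m-sr-cv8")
model = Wav2Vec2ForCTC.from_pretrained("comodoro/wav2vec2-xls-r-300m-sr-cv8")

speech, sr = torchaudio.load("example.wav")
speech = torchaudio.functional.resample(speech, sr, 16_000).squeeze()

inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1)))
```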
{"language": ["sr"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "robust-speech-event", "xlsr-fine-tuning-week", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0", {"name": "Serbian comodoro Wav2Vec2 XLSR 300M CV8", "results": [{"task": {"name": "Automatic Speech Recognition", "type": "automatic-speech-recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "sr"}, "metrics": [{"name": "Test WER", "type": "wer", "value": 48.5}, {"name": "Test CER", "type": "cer", "value": 18.4}]}]}], "model-index": [{"name": "wav2vec2-xls-r-300m-sr-cv8", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8.0", "type": "mozilla-foundation/common_voice_8_0", "args": "sr"}, "metrics": [{"type": "wer", "value": 48.53, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "sr"}, "metrics": [{"type": "wer", "value": 97.43, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "sr"}, "metrics": [{"type": "wer", "value": 96.69, "name": "Test WER"}]}]}]}
comodoro/wav2vec2-xls-r-300m-sr-cv8
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "robust-speech-event", "xlsr-fine-tuning-week", "hf-asr-leaderboard", "sr", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "sr" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #robust-speech-event #xlsr-fine-tuning-week #hf-asr-leaderboard #sr #license-apache-2.0 #model-index #endpoints_compatible #region-us
Serbian wav2vec2-xls-r-300m-sr-cv8 ================================== This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common\_voice dataset. It achieves the following results on the evaluation set: * Loss: 1.7302 * Wer: 0.4825 * Cer: 0.1847 Evaluation on mozilla-foundation/common\_voice\_8\_0 gave the following results: * WER: 0.48530097993467103 * CER: 0.18413288165227845 Evaluation on speech-recognition-community-v2/dev\_data gave the following results: * WER: 0.9718373107518604 * CER: 0.8302740620263108 The model can be evaluated using the attached 'URL' script: ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0001 * train\_batch\_size: 16 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 300 * num\_epochs: 800 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.16.2 * Pytorch 1.10.1+cu102 * Datasets 1.18.3 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 300\n* num\\_epochs: 800\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.1+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #robust-speech-event #xlsr-fine-tuning-week #hf-asr-leaderboard #sr #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 300\n* num\\_epochs: 800\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.1+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
automatic-speech-recognition
transformers
# wav2vec2-xls-r-300m-west-slavic-cv8 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the Common Voice 8 dataset of five similar languages with similar scripts: Czech, Slovak, Polish, Slovenian and Upper Sorbian. Training and validation sets were concatenated and shuffled. Evaluation set used for training was concatenated from the respective test sets and shuffled while limiting each language to at most 2000 samples. During training, cca WER 70 was achieved on this set. ### Evaluation script ``` python eval.py --model_id comodoro/wav2vec2-xls-r-300m-west-slavic-cv8 --dataset mozilla-foundation/common_voice_8_0 --split test --config {lang} ``` ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 50 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0
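As an illustration only, the same checkpoint handles all five languages, so inference is identical regardless of the input language; the audio file paths below are placeholders:

```python
# Illustrative sketch: one checkpoint transcribes all five languages.
# The audio file paths are placeholders.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition",
               model="comodoro/wav2vec2-xls-r-300m-west-slavic-cv8")

samples = {"cs": "czech.wav", "hsb": "upper_sorbian.wav", "pl": "polish.wav",
           "sk": "slovak.wav", "sl": "slovenian.wav"}
for lang, path in samples.items():
    print(lang, asr(path)["text"])
```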
{"language": ["cs", "hsb", "pl", "sk", "sl"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "xlsr-fine-tuning-week"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "wav2vec2-xls-r-300m-west-slavic-cv8", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "cs"}, "metrics": [{"type": "wer", "value": 53.5, "name": "Test WER"}, {"type": "cer", "value": 14.7, "name": "Test CER"}, {"type": "wer", "value": 81.7, "name": "Test WER"}, {"type": "cer", "value": 21.2, "name": "Test CER"}, {"type": "wer", "value": 60.2, "name": "Test WER"}, {"type": "cer", "value": 15.6, "name": "Test CER"}, {"type": "wer", "value": 69.6, "name": "Test WER"}, {"type": "cer", "value": 20.7, "name": "Test CER"}, {"type": "wer", "value": 73.2, "name": "Test WER"}, {"type": "cer", "value": 23.2, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "cs"}, "metrics": [{"type": "wer", "value": 84.11, "name": "Test WER"}, {"type": "wer", "value": 65.3, "name": "Test WER"}, {"type": "wer", "value": 88.37, "name": "Test WER"}, {"type": "wer", "value": 87.69, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "cs"}, "metrics": [{"type": "wer", "value": 75.99, "name": "Test WER"}, {"type": "wer", "value": 72.0, "name": "Test WER"}, {"type": "wer", "value": 89.08, "name": "Test WER"}, {"type": "wer", "value": 87.89, "name": "Test WER"}]}]}]}
comodoro/wav2vec2-xls-r-300m-west-slavic-cv8
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "xlsr-fine-tuning-week", "cs", "hsb", "pl", "sk", "sl", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "cs", "hsb", "pl", "sk", "sl" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #robust-speech-event #xlsr-fine-tuning-week #cs #hsb #pl #sk #sl #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
# wav2vec2-xls-r-300m-west-slavic-cv8 This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the Common Voice 8 dataset of five similar languages with similar scripts: Czech, Slovak, Polish, Slovenian and Upper Sorbian. Training and validation sets were concatenated and shuffled. Evaluation set used for training was concatenated from the respective test sets and shuffled while limiting each language to at most 2000 samples. During training, cca WER 70 was achieved on this set. ### Evaluation script ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 50 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0
[ "# wav2vec2-xls-r-300m-west-slavic-cv8\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the Common Voice 8 dataset of five similar languages with similar scripts: Czech, Slovak, Polish, Slovenian and Upper Sorbian. Training and validation sets were concatenated and shuffled.\n\nEvaluation set used for training was concatenated from the respective test sets and shuffled while limiting each language to at most 2000 samples. During training, cca WER 70 was achieved on this set.", "### Evaluation script", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 32\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 50\n- mixed_precision_training: Native AMP", "### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.1+cu102\n- Datasets 1.18.3\n- Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #robust-speech-event #xlsr-fine-tuning-week #cs #hsb #pl #sk #sl #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "# wav2vec2-xls-r-300m-west-slavic-cv8\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the Common Voice 8 dataset of five similar languages with similar scripts: Czech, Slovak, Polish, Slovenian and Upper Sorbian. Training and validation sets were concatenated and shuffled.\n\nEvaluation set used for training was concatenated from the respective test sets and shuffled while limiting each language to at most 2000 samples. During training, cca WER 70 was achieved on this set.", "### Evaluation script", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 32\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 50\n- mixed_precision_training: Native AMP", "### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.1+cu102\n- Datasets 1.18.3\n- Tokenizers 0.11.0" ]
fill-mask
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-base-finetuned-toxic This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.2768 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.5338 | 1.0 | 313 | 2.3127 | | 2.4482 | 2.0 | 626 | 2.2985 | | 2.4312 | 3.0 | 939 | 2.2411 | ### Framework versions - Transformers 4.16.0 - Pytorch 1.10.0 - Datasets 1.18.1 - Tokenizers 0.10.3
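The auto-generated card above lacks a usage section. A minimal sketch of querying the fine-tuned masked-language-model head; the example sentence is arbitrary and `<mask>` is the RoBERTa-style mask token:

```python
# Minimal fill-mask sketch; the input sentence is arbitrary.
from transformers import pipeline

fill = pipeline("fill-mask", model="conjuring92/distilroberta-base-finetuned-toxic")
for pred in fill("This comment is completely <mask>."):
    print(f'{pred["token_str"]!r}: {pred["score"]:.3f}')
```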
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "distilroberta-base-finetuned-toxic", "results": []}]}
conjuring92/distilroberta-base-finetuned-toxic
null
[ "transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #roberta #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
distilroberta-base-finetuned-toxic ================================== This model is a fine-tuned version of distilroberta-base on the None dataset. It achieves the following results on the evaluation set: * Loss: 2.2768 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 32 * eval\_batch\_size: 32 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.16.0 * Pytorch 1.10.0 * Datasets 1.18.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0\n* Pytorch 1.10.0\n* Datasets 1.18.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #roberta #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.0\n* Pytorch 1.10.0\n* Datasets 1.18.1\n* Tokenizers 0.10.3" ]
text-generation
transformers
# Snape DialoGPT Model
{"tags": ["conversational"]}
conniezyj/DialoGPT-small-snape
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Snape DialoGPT Model
[ "# Snape DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Snape DialoGPT Model" ]
token-classification
transformers
Named-entity recognition model trained on the I2B2 training data set for PHI (protected health information).
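Below is a minimal usage sketch (not part of the original card): it loads the checkpoint with the standard `transformers` token-classification pipeline. The example sentence is invented, and the exact PHI tag names returned depend on the labels stored in the model's config.

```python
from transformers import pipeline

# Token-classification pipeline for PHI tagging; the aggregation strategy
# merges word pieces back into whole entity spans.
ner = pipeline(
    "token-classification",
    model="connorboyle/bert-ner-i2b2",
    aggregation_strategy="simple",
)

note = "Mr. Smith was admitted on 03/12/2019 and seen by Dr. Jones at City Hospital."
for entity in ner(note):
    print(entity["entity_group"], "->", entity["word"], round(entity["score"], 3))
```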
{}
connorboyle/bert-ner-i2b2
null
[ "transformers", "pytorch", "safetensors", "bert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #safetensors #bert #token-classification #autotrain_compatible #endpoints_compatible #region-us
Named-entity recognition model trained on the I2B2 training data set for PHI (protected health information).
[]
[ "TAGS\n#transformers #pytorch #safetensors #bert #token-classification #autotrain_compatible #endpoints_compatible #region-us \n" ]
text-classification
transformers
hello
{}
conversify/response-score
null
[ "transformers", "pytorch", "jax", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #jax #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us
hello
[]
[ "TAGS\n#transformers #pytorch #jax #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n" ]
null
transformers
# LIMIT-BERT Code and model for the *EMNLP 2020 Findings* paper: [LIMIT-BERT: Linguistic Informed Multi-task BERT](https://arxiv.org/abs/1910.14296)) ## Contents 1. [Requirements](#Requirements) 2. [Training](#Training) ## Requirements * Python 3.6 or higher. * Cython 0.25.2 or any compatible version. * [PyTorch](http://pytorch.org/) 1.0.0+. * [EVALB](http://nlp.cs.nyu.edu/evalb/). Before starting, run `make` inside the `EVALB/` directory to compile an `evalb` executable. This will be called from Python for evaluation. * [pytorch-transformers](https://github.com/huggingface/pytorch-transformers) PyTorch 1.0.0+ or any compatible version. #### Pre-trained Models (PyTorch) The following pre-trained models are available for download from Google Drive: * [`LIMIT-BERT`](https://drive.google.com/open?id=1fm0cK2A91iLG3lCpwowCCQSALnWS2X4i): PyTorch version, same setting with BERT-Large-WWM,loading model with [pytorch-transformers](https://github.com/huggingface/pytorch-transformers). ## How to use ``` from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("cooelf/limitbert") model = AutoModel.from_pretrained("cooelf/limitbert") ``` Please see our original repo for the training scripts. https://github.com/cooelf/LIMIT-BERT ## Training To train LIMIT-BERT, simply run: ``` sh run_limitbert.sh ``` ### Evaluation Instructions To test after setting model path: ``` sh test_bert.sh ``` ## Citation ``` @article{zhou2019limit, title={{LIMIT-BERT}: Linguistic informed multi-task {BERT}}, author={Zhou, Junru and Zhang, Zhuosheng and Zhao, Hai}, journal={arXiv preprint arXiv:1910.14296}, year={2019} } ```
{}
cooelf/limitbert
null
[ "transformers", "pytorch", "arxiv:1910.14296", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1910.14296" ]
[]
TAGS #transformers #pytorch #arxiv-1910.14296 #endpoints_compatible #region-us
# LIMIT-BERT Code and model for the *EMNLP 2020 Findings* paper: LIMIT-BERT: Linguistic Informed Multi-task BERT) ## Contents 1. Requirements 2. Training ## Requirements * Python 3.6 or higher. * Cython 0.25.2 or any compatible version. * PyTorch 1.0.0+. * EVALB. Before starting, run 'make' inside the 'EVALB/' directory to compile an 'evalb' executable. This will be called from Python for evaluation. * pytorch-transformers PyTorch 1.0.0+ or any compatible version. #### Pre-trained Models (PyTorch) The following pre-trained models are available for download from Google Drive: * 'LIMIT-BERT': PyTorch version, same setting with BERT-Large-WWM,loading model with pytorch-transformers. ## How to use Please see our original repo for the training scripts. URL ## Training To train LIMIT-BERT, simply run: ### Evaluation Instructions To test after setting model path:
[ "# LIMIT-BERT\n\nCode and model for the *EMNLP 2020 Findings* paper: \n\nLIMIT-BERT: Linguistic Informed Multi-task BERT)", "## Contents\n\n1. Requirements\n2. Training", "## Requirements\n\n* Python 3.6 or higher.\n* Cython 0.25.2 or any compatible version.\n* PyTorch 1.0.0+. \n* EVALB. Before starting, run 'make' inside the 'EVALB/' directory to compile an 'evalb' executable. This will be called from Python for evaluation. \n* pytorch-transformers PyTorch 1.0.0+ or any compatible version.", "#### Pre-trained Models (PyTorch)\nThe following pre-trained models are available for download from Google Drive:\n* 'LIMIT-BERT': \n PyTorch version, same setting with BERT-Large-WWM,loading model with pytorch-transformers.", "## How to use\n\n\n\nPlease see our original repo for the training scripts.\n\nURL", "## Training\n\nTo train LIMIT-BERT, simply run:", "### Evaluation Instructions\n\nTo test after setting model path:" ]
[ "TAGS\n#transformers #pytorch #arxiv-1910.14296 #endpoints_compatible #region-us \n", "# LIMIT-BERT\n\nCode and model for the *EMNLP 2020 Findings* paper: \n\nLIMIT-BERT: Linguistic Informed Multi-task BERT)", "## Contents\n\n1. Requirements\n2. Training", "## Requirements\n\n* Python 3.6 or higher.\n* Cython 0.25.2 or any compatible version.\n* PyTorch 1.0.0+. \n* EVALB. Before starting, run 'make' inside the 'EVALB/' directory to compile an 'evalb' executable. This will be called from Python for evaluation. \n* pytorch-transformers PyTorch 1.0.0+ or any compatible version.", "#### Pre-trained Models (PyTorch)\nThe following pre-trained models are available for download from Google Drive:\n* 'LIMIT-BERT': \n PyTorch version, same setting with BERT-Large-WWM,loading model with pytorch-transformers.", "## How to use\n\n\n\nPlease see our original repo for the training scripts.\n\nURL", "## Training\n\nTo train LIMIT-BERT, simply run:", "### Evaluation Instructions\n\nTo test after setting model path:" ]
fill-mask
transformers
# Cicero-Similis ## Model description A Latin Language Model, trained on Latin texts, and evaluated using the corpus of Cicero, as described in the paper _What Would Cicero Write? -- Examining Critical Textual Decisions with a Language Model_ by Todd Cook, Published in Ciceroniana On Line, Vol. V, #2. ## Intended uses & limitations #### How to use Normalize text using JV Replacement and tokenize using CLTK to separate enclitics such as "-que", then: ``` from transformers import BertForMaskedLM, AutoTokenizer, FillMaskPipeline tokenizer = AutoTokenizer.from_pretrained("cook/cicero-similis") model = BertForMaskedLM.from_pretrained("cook/cicero-similis") fill_mask = FillMaskPipeline(model=model, tokenizer=tokenizer, top_k=10_000) # Cicero, De Re Publica, VI, 32, 2 # "animal" is found in A, Q, PhD manuscripts # 'anima' H^1 Macr. et codd. Tusc. results = fill_mask("inanimum est enim omne quod pulsu agitatur externo; quod autem est [MASK],") ``` #### Limitations and bias Currently the model training data excludes modern and 19th century texts, but that weakness is the model's strength; it's not aimed to be a one-size-fits-all model. ## Training data Trained on the corpora Phi5, Tesserae, Thomas Aquinas, and Patrologes Latina. ## Training procedure 5 epochs, masked language modeling .15, effective batch size 32 ## Eval results A novel evaluation metric is proposed in the paper _What Would Cicero Write? -- Examining Critical Textual Decisions with a Language Model_ by Todd Cook, Published in Ciceroniana On Line, Vol. V, #2. ### BibTeX entry and citation info TODO _What Would Cicero Write? -- Examining Critical Textual Decisions with a Language Model_ by Todd Cook, Published in Ciceroniana On Line, Vol. V, #2.
{"language": ["la"], "license": "apache-2.0", "tags": ["language model"], "datasets": ["Tesserae", "Phi5", "Thomas Aquinas", "Patrologia Latina"]}
cook/cicero-similis
null
[ "transformers", "pytorch", "tf", "jax", "bert", "fill-mask", "language model", "la", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "la" ]
TAGS #transformers #pytorch #tf #jax #bert #fill-mask #language model #la #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# Cicero-Similis ## Model description A Latin Language Model, trained on Latin texts, and evaluated using the corpus of Cicero, as described in the paper _What Would Cicero Write? -- Examining Critical Textual Decisions with a Language Model_ by Todd Cook, Published in Ciceroniana On Line, Vol. V, #2. ## Intended uses & limitations #### How to use Normalize text using JV Replacement and tokenize using CLTK to separate enclitics such as "-que", then: #### Limitations and bias Currently the model training data excludes modern and 19th century texts, but that weakness is the model's strength; it's not aimed to be a one-size-fits-all model. ## Training data Trained on the corpora Phi5, Tesserae, Thomas Aquinas, and Patrologes Latina. ## Training procedure 5 epochs, masked language modeling .15, effective batch size 32 ## Eval results A novel evaluation metric is proposed in the paper _What Would Cicero Write? -- Examining Critical Textual Decisions with a Language Model_ by Todd Cook, Published in Ciceroniana On Line, Vol. V, #2. ### BibTeX entry and citation info TODO _What Would Cicero Write? -- Examining Critical Textual Decisions with a Language Model_ by Todd Cook, Published in Ciceroniana On Line, Vol. V, #2.
[ "# Cicero-Similis", "## Model description\n\nA Latin Language Model, trained on Latin texts, and evaluated using the corpus of Cicero, as described in the paper _What Would Cicero Write? -- Examining Critical Textual Decisions with a Language Model_ by Todd Cook,\nPublished in Ciceroniana On Line, Vol. V, #2.", "## Intended uses & limitations", "#### How to use\n\nNormalize text using JV Replacement and tokenize using CLTK to separate enclitics such as \"-que\", then:", "#### Limitations and bias\n\nCurrently the model training data excludes modern and 19th century texts, but that weakness is the model's strength; it's not aimed to be a one-size-fits-all model.", "## Training data\n\nTrained on the corpora Phi5, Tesserae, Thomas Aquinas, and Patrologes Latina.", "## Training procedure\n\n5 epochs, masked language modeling .15, effective batch size 32", "## Eval results\nA novel evaluation metric is proposed in the paper _What Would Cicero Write? -- Examining Critical Textual Decisions with a Language Model_ by Todd Cook,\nPublished in Ciceroniana On Line, Vol. V, #2.", "### BibTeX entry and citation info\nTODO\n_What Would Cicero Write? -- Examining Critical Textual Decisions with a Language Model_ by Todd Cook,\nPublished in Ciceroniana On Line, Vol. V, #2." ]
[ "TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #language model #la #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# Cicero-Similis", "## Model description\n\nA Latin Language Model, trained on Latin texts, and evaluated using the corpus of Cicero, as described in the paper _What Would Cicero Write? -- Examining Critical Textual Decisions with a Language Model_ by Todd Cook,\nPublished in Ciceroniana On Line, Vol. V, #2.", "## Intended uses & limitations", "#### How to use\n\nNormalize text using JV Replacement and tokenize using CLTK to separate enclitics such as \"-que\", then:", "#### Limitations and bias\n\nCurrently the model training data excludes modern and 19th century texts, but that weakness is the model's strength; it's not aimed to be a one-size-fits-all model.", "## Training data\n\nTrained on the corpora Phi5, Tesserae, Thomas Aquinas, and Patrologes Latina.", "## Training procedure\n\n5 epochs, masked language modeling .15, effective batch size 32", "## Eval results\nA novel evaluation metric is proposed in the paper _What Would Cicero Write? -- Examining Critical Textual Decisions with a Language Model_ by Todd Cook,\nPublished in Ciceroniana On Line, Vol. V, #2.", "### BibTeX entry and citation info\nTODO\n_What Would Cicero Write? -- Examining Critical Textual Decisions with a Language Model_ by Todd Cook,\nPublished in Ciceroniana On Line, Vol. V, #2." ]
text-generation
transformers
# Joreyar DialoGPT Model
{"tags": ["conversational"]}
cookirei/DialoGPT-medium-Joreyar
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Joreyar DialoGPT Model
[ "# Joreyar DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Joreyar DialoGPT Model" ]
feature-extraction
transformers
This is the SciBERT pretrained language model further fine-tuned on masked language modeling and cite-worthiness detection on the [CiteWorth](https://github.com/copenlu/cite-worth) dataset. Note that this model should be used for further fine-tuning on downstream scientific document understanding tasks.
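Since the card recommends further fine-tuning rather than direct use, here is a minimal sketch of how such fine-tuning could start — the sequence-classification head, the two-label setup, and the example sentence are assumptions for illustration, not part of the original card:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load the cite-worthiness-pretrained encoder and attach a freshly
# initialised classification head for a downstream task.
tokenizer = AutoTokenizer.from_pretrained("copenlu/citebert")
model = AutoModelForSequenceClassification.from_pretrained(
    "copenlu/citebert", num_labels=2  # label count is task-specific
)

inputs = tokenizer(
    "Prior work has reported strong results on this benchmark.",
    return_tensors="pt",
)
logits = model(**inputs).logits
print(logits.shape)  # (1, 2) -- head is untrained; fine-tune before relying on outputs
```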
{}
copenlu/citebert
null
[ "transformers", "pytorch", "bert", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #bert #feature-extraction #endpoints_compatible #region-us
This is the SciBERT pretrained language model further fine-tuned on masked language modeling and cite-worthiness detection on the CiteWorth dataset. Note that this model should be used for further fine-tuning on downstream scientific document understanding tasks.
[]
[ "TAGS\n#transformers #pytorch #bert #feature-extraction #endpoints_compatible #region-us \n" ]
text-classification
transformers
# Uzbek news category classifier (based on UzBERT) UzBERT fine-tuned to classify news articles into one of the following categories: - дунё - жамият - жиноят - иқтисодиёт - маданият - реклама - саломатлик - сиёсат - спорт - фан ва техника - шоу-бизнес ## How to use ```python >>> from transformers import pipeline >>> classifier = pipeline('text-classification', model='coppercitylabs/uzbek-news-category-classifier') >>> text = """Маҳоратли пара-енгил атлетикачимиз Ҳусниддин Норбеков Токио-2020 Паралимпия ўйинларида ғалаба қозониб, делегациямиз ҳисобига навбатдаги олтин медални келтирди. Бу ҳақда МОҚ хабар берди. Норбеков ҳозиргина ядро улоқтириш дастурида ўз ғалабасини тантана қилди. Ушбу машқда вакилимиз 16:13 метр натижа билан энг яхши кўрсаткични қайд этди. Шу тариқа, делегациямиз ҳисобидаги медаллар сони 16 (6 та олтин, 4 та кумуш ва 6 та бронза) тага етди. Кейинги кун дастурларида иштирок этадиган ҳамюртларимизга омад тилаб қоламиз!""" >>> classifier(text) [{'label': 'спорт', 'score': 0.9865401983261108}] ``` ## Fine-tuning data Fine-tuned on ~60K news articles for 3 epochs.
{"language": "uz", "license": "mit", "tags": ["uzbek", "cyrillic", "news category classifier"], "datasets": ["webcrawl"]}
coppercitylabs/uzbek-news-category-classifier
null
[ "transformers", "pytorch", "safetensors", "bert", "text-classification", "uzbek", "cyrillic", "news category classifier", "uz", "dataset:webcrawl", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "uz" ]
TAGS #transformers #pytorch #safetensors #bert #text-classification #uzbek #cyrillic #news category classifier #uz #dataset-webcrawl #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
# Uzbek news category classifier (based on UzBERT) UzBERT fine-tuned to classify news articles into one of the following categories: - дунё - жамият - жиноят - иқтисодиёт - маданият - реклама - саломатлик - сиёсат - спорт - фан ва техника - шоу-бизнес ## How to use ## Fine-tuning data Fine-tuned on ~60K news articles for 3 epochs.
[ "# Uzbek news category classifier (based on UzBERT)\n\nUzBERT fine-tuned to classify news articles into one of the following\ncategories:\n\n- дунё\n- жамият\n- жиноят\n- иқтисодиёт\n- маданият\n- реклама\n- саломатлик\n- сиёсат\n- спорт\n- фан ва техника\n- шоу-бизнес", "## How to use", "## Fine-tuning data\nFine-tuned on ~60K news articles for 3 epochs." ]
[ "TAGS\n#transformers #pytorch #safetensors #bert #text-classification #uzbek #cyrillic #news category classifier #uz #dataset-webcrawl #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# Uzbek news category classifier (based on UzBERT)\n\nUzBERT fine-tuned to classify news articles into one of the following\ncategories:\n\n- дунё\n- жамият\n- жиноят\n- иқтисодиёт\n- маданият\n- реклама\n- саломатлик\n- сиёсат\n- спорт\n- фан ва техника\n- шоу-бизнес", "## How to use", "## Fine-tuning data\nFine-tuned on ~60K news articles for 3 epochs." ]
fill-mask
transformers
# UzBERT base model (uncased) Pretrained model on Uzbek language (Cyrillic script) using a masked language modeling and next sentence prediction objectives. ## How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='coppercitylabs/uzbert-base-uncased') >>> unmasker("Алишер Навоий – улуғ ўзбек ва бошқа туркий халқларнинг [MASK], мутафаккири ва давлат арбоби бўлган.") [ { 'token_str': 'шоири', 'token': 13587, 'score': 0.7974384427070618, 'sequence': 'алишер навоий – улуғ ўзбек ва бошқа туркий халқларнинг шоири, мутафаккир ##и ва давлат арбоби бўлган.' }, { 'token_str': 'олими', 'token': 18500, 'score': 0.09166576713323593, 'sequence': 'алишер навоий – улуғ ўзбек ва бошқа туркий халқларнинг олими, мутафаккир ##и ва давлат арбоби бўлган.' }, { 'token_str': 'асосчиси', 'token': 7469, 'score': 0.02451123297214508, 'sequence': 'алишер навоий – улуғ ўзбек ва бошқа туркий халқларнинг асосчиси, мутафаккир ##и ва давлат арбоби бўлган.' }, { 'token_str': 'ёзувчиси', 'token': 22439, 'score': 0.017601722851395607, 'sequence': 'алишер навоий – улуғ ўзбек ва бошқа туркий халқларнинг ёзувчиси, мутафаккир ##и ва давлат арбоби бўлган.' }, { 'token_str': 'устози', 'token': 11494, 'score': 0.010115668177604675, 'sequence': 'алишер навоий – улуғ ўзбек ва бошқа туркий халқларнинг устози, мутафаккир ##и ва давлат арбоби бўлган.' } ] ``` ## Training data UzBERT model was pretrained on \~625K news articles (\~142M words). ## BibTeX entry and citation info ```bibtex @misc{mansurov2021uzbert, title={{UzBERT: pretraining a BERT model for Uzbek}}, author={B. Mansurov and A. Mansurov}, year={2021}, eprint={2108.09814}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
{"language": "uz", "license": "mit", "tags": ["uzbert", "uzbek", "bert", "cyrillic"], "datasets": ["webcrawl"]}
coppercitylabs/uzbert-base-uncased
null
[ "transformers", "pytorch", "safetensors", "bert", "fill-mask", "uzbert", "uzbek", "cyrillic", "uz", "dataset:webcrawl", "arxiv:2108.09814", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2108.09814" ]
[ "uz" ]
TAGS #transformers #pytorch #safetensors #bert #fill-mask #uzbert #uzbek #cyrillic #uz #dataset-webcrawl #arxiv-2108.09814 #license-mit #autotrain_compatible #endpoints_compatible #region-us
# UzBERT base model (uncased) Pretrained model on Uzbek language (Cyrillic script) using a masked language modeling and next sentence prediction objectives. ## How to use You can use this model directly with a pipeline for masked language modeling: ## Training data UzBERT model was pretrained on \~625K news articles (\~142M words). ## BibTeX entry and citation info
[ "# UzBERT base model (uncased)\n\nPretrained model on Uzbek language (Cyrillic script) using a masked\nlanguage modeling and next sentence prediction objectives.", "## How to use\n\nYou can use this model directly with a pipeline for masked language modeling:", "## Training data\n\nUzBERT model was pretrained on \\~625K news articles (\\~142M words).", "## BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #safetensors #bert #fill-mask #uzbert #uzbek #cyrillic #uz #dataset-webcrawl #arxiv-2108.09814 #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "# UzBERT base model (uncased)\n\nPretrained model on Uzbek language (Cyrillic script) using a masked\nlanguage modeling and next sentence prediction objectives.", "## How to use\n\nYou can use this model directly with a pipeline for masked language modeling:", "## Training data\n\nUzBERT model was pretrained on \\~625K news articles (\\~142M words).", "## BibTeX entry and citation info" ]
text-generation
transformers
# Rick Sanchez
{"tags": ["conversational"]}
cosmic/DialoGPT-Rick
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Rick Sanchez
[ "# Rick Sanchez" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Rick Sanchez" ]
text-generation
transformers
# Harry Potter DialoGPT Model
{"tags": ["conversational"]}
cosmicray001/prod-harry
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Harry Potter DialoGPT Model
[ "# Harry Potter DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Harry Potter DialoGPT Model" ]
text-generation
transformers
# Harry Potter DialoGPT Model
{"tags": ["conversational"]}
cosmicray001/small-harry
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Harry Potter DialoGPT Model
[ "# Harry Potter DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Harry Potter DialoGPT Model" ]
text2text-generation
transformers
# Pretrained BART in Korean This is pretrained BART model with multiple Korean Datasets. I used multiple datasets for generalizing the model for both colloquial and written texts. The training is supported by [TPU Research Cloud](https://sites.research.google/trc/) program. The script which is used to pre-train model is [here](https://github.com/cosmoquester/transformers-bart-pretrain). When you use the reference API, you must wrap the sentence with `[BOS]` and `[EOS]` like below example. ``` [BOS] 안녕하세요? 반가워요~~ [EOS] ``` You can also test mask filling performance using `[MASK]` token like this. ``` [BOS] [MASK] 먹었어? [EOS] ``` ## Benchmark <style> table { border-collapse: collapse; border-style: hidden; width: 100%; } td, th { border: 1px solid #4d5562; padding: 8px; } </style> <table> <tr> <th>Dataset</th> <td>KLUE NLI dev</th> <td>NSMC test</td> <td>QuestionPair test</td> <td colspan="2">KLUE TC dev</td> <td colspan="3">KLUE STS dev</td> <td colspan="3">KorSTS dev</td> <td colspan="2">HateSpeech dev</td> </tr> <tr> <th>Metric</th> <!-- KLUE NLI --> <td>Acc</th> <!-- NSMC --> <td>Acc</td> <!-- QuestionPair --> <td>Acc</td> <!-- KLUE TC --> <td>Acc</td> <td>F1</td> <!-- KLUE STS --> <td>F1</td> <td>Pearson</td> <td>Spearman</td> <!-- KorSTS --> <td>F1</td> <td>Pearson</td> <td>Spearman</td> <!-- HateSpeech --> <td>Bias Acc</td> <td>Hate Acc</td> </tr> <tr> <th>Score</th> <!-- KLUE NLI --> <td>0.7390</th> <!-- NSMC --> <td>0.8877</td> <!-- QuestionPair --> <td>0.9208</td> <!-- KLUE TC --> <td>0.8667</td> <td>0.8637</td> <!-- KLUE STS --> <td>0.7654</td> <td>0.8090</td> <td>0.8040</td> <!-- KorSTS --> <td>0.8067</td> <td>0.7909</td> <td>0.7784</td> <!-- HateSpeech --> <td>0.8280</td> <td>0.5669</td> </tr> </table> - The performance was measured using [the notebooks here](https://github.com/cosmoquester/transformers-bart-finetune) with colab. ## Used Datasets ### [모두의 말뭉치](https://corpus.korean.go.kr/) - 일상 대화 말뭉치 2020 - 구어 말뭉치 - 문어 말뭉치 - 신문 말뭉치 ### AIhub - [개방데이터 전문분야말뭉치](https://aihub.or.kr/aidata/30717) - [개방데이터 한국어대화요약](https://aihub.or.kr/aidata/30714) - [개방데이터 감성 대화 말뭉치](https://aihub.or.kr/aidata/7978) - [개방데이터 한국어 음성](https://aihub.or.kr/aidata/105) - [개방데이터 한국어 SNS](https://aihub.or.kr/aidata/30718) ### [세종 말뭉치](https://ithub.korean.go.kr/)
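The card gives the input convention but no loading code; the snippet below is a sketch under the assumption that the checkpoint loads through the standard `transformers` text2text-generation pipeline (the `[BOS]`/`[EOS]`/`[MASK]` markers are written literally, as the card instructs):

```python
from transformers import pipeline

# Seq2seq generation with the literal [BOS]/[EOS] wrapping described above;
# the decoder is expected to fill the [MASK] position.
generator = pipeline("text2text-generation", model="cosmoquester/bart-ko-base")
print(generator("[BOS] [MASK] 먹었어? [EOS]"))
```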
{"language": "ko"}
cosmoquester/bart-ko-base
null
[ "transformers", "pytorch", "tf", "bart", "text2text-generation", "ko", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ko" ]
TAGS #transformers #pytorch #tf #bart #text2text-generation #ko #autotrain_compatible #endpoints_compatible #region-us
Pretrained BART in Korean
=========================

This is a pretrained BART model trained on multiple Korean datasets. I used multiple datasets so that the model generalizes to both colloquial and written texts. The training is supported by the TPU Research Cloud program.

The script used to pre-train the model is here.

When you use the reference API, you must wrap the sentence with '[BOS]' and '[EOS]' as in the example below.

You can also test mask-filling performance using the '[MASK]' token like this.

Benchmark
---------

* The performance was measured using the notebooks here with Colab.

Used Datasets
-------------

### 모두의 말뭉치

* 일상 대화 말뭉치 2020
* 구어 말뭉치
* 문어 말뭉치
* 신문 말뭉치

### AIhub

* 개방데이터 전문분야말뭉치
* 개방데이터 한국어대화요약
* 개방데이터 감성 대화 말뭉치
* 개방데이터 한국어 음성
* 개방데이터 한국어 SNS

### 세종 말뭉치
[ "### 모두의 말뭉치\n\n\n* 일상 대화 말뭉치 2020\n* 구어 말뭉치\n* 문어 말뭉치\n* 신문 말뭉치", "### AIhub\n\n\n* 개방데이터 전문분야말뭉치\n* 개방데이터 한국어대화요약\n* 개방데이터 감성 대화 말뭉치\n* 개방데이터 한국어 음성\n* 개방데이터 한국어 SNS", "### 세종 말뭉치" ]
[ "TAGS\n#transformers #pytorch #tf #bart #text2text-generation #ko #autotrain_compatible #endpoints_compatible #region-us \n", "### 모두의 말뭉치\n\n\n* 일상 대화 말뭉치 2020\n* 구어 말뭉치\n* 문어 말뭉치\n* 신문 말뭉치", "### AIhub\n\n\n* 개방데이터 전문분야말뭉치\n* 개방데이터 한국어대화요약\n* 개방데이터 감성 대화 말뭉치\n* 개방데이터 한국어 음성\n* 개방데이터 한국어 SNS", "### 세종 말뭉치" ]
text2text-generation
transformers
# Pretrained BART in Korean This is pretrained BART model with multiple Korean Datasets. I used multiple datasets for generalizing the model for both colloquial and written texts. The training is supported by [TPU Research Cloud](https://sites.research.google/trc/) program. The script which is used to pre-train model is [here](https://github.com/cosmoquester/transformers-bart-pretrain). When you use the reference API, you must wrap the sentence with `[BOS]` and `[EOS]` like below example. ``` [BOS] 안녕하세요? 반가워요~~ [EOS] ``` You can also test mask filling performance using `[MASK]` token like this. ``` [BOS] [MASK] 먹었어? [EOS] ``` ## Benchmark <style> table { border-collapse: collapse; border-style: hidden; width: 100%; } td, th { border: 1px solid #4d5562; padding: 8px; } </style> <table> <tr> <th>Dataset</th> <td>KLUE NLI dev</th> <td>NSMC test</td> <td>QuestionPair test</td> <td colspan="2">KLUE TC dev</td> <td colspan="3">KLUE STS dev</td> <td colspan="3">KorSTS dev</td> <td colspan="2">HateSpeech dev</td> </tr> <tr> <th>Metric</th> <!-- KLUE NLI --> <td>Acc</th> <!-- NSMC --> <td>Acc</td> <!-- QuestionPair --> <td>Acc</td> <!-- KLUE TC --> <td>Acc</td> <td>F1</td> <!-- KLUE STS --> <td>F1</td> <td>Pearson</td> <td>Spearman</td> <!-- KorSTS --> <td>F1</td> <td>Pearson</td> <td>Spearman</td> <!-- HateSpeech --> <td>Bias Acc</td> <td>Hate Acc</td> </tr> <tr> <th>Score</th> <!-- KLUE NLI --> <td>0.5253</th> <!-- NSMC --> <td>0.8425</td> <!-- QuestionPair --> <td>0.8945</td> <!-- KLUE TC --> <td>0.8047</td> <td>0.7988</td> <!-- KLUE STS --> <td>0.7411</td> <td>0.7471</td> <td>0.7399</td> <!-- KorSTS --> <td>0.7725</td> <td>0.6503</td> <td>0.6191</td> <!-- HateSpeech --> <td>0.7537</td> <td>0.5605</td> </tr> </table> - The performance was measured using [the notebooks here](https://github.com/cosmoquester/transformers-bart-finetune) with colab. ## Used Datasets ### [모두의 말뭉치](https://corpus.korean.go.kr/) - 일상 대화 말뭉치 2020 - 구어 말뭉치 - 문어 말뭉치 - 신문 말뭉치 ### AIhub - [개방데이터 전문분야말뭉치](https://aihub.or.kr/aidata/30717) - [개방데이터 한국어대화요약](https://aihub.or.kr/aidata/30714) - [개방데이터 감성 대화 말뭉치](https://aihub.or.kr/aidata/7978) - [개방데이터 한국어 음성](https://aihub.or.kr/aidata/105) - [개방데이터 한국어 SNS](https://aihub.or.kr/aidata/30718) ### [세종 말뭉치](https://ithub.korean.go.kr/)
{"language": "ko"}
cosmoquester/bart-ko-mini
null
[ "transformers", "pytorch", "tf", "bart", "text2text-generation", "ko", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ko" ]
TAGS #transformers #pytorch #tf #bart #text2text-generation #ko #autotrain_compatible #endpoints_compatible #region-us
Pretrained BART in Korean
=========================

This is a pretrained BART model trained on multiple Korean datasets. I used multiple datasets so that the model generalizes to both colloquial and written texts. The training is supported by the TPU Research Cloud program.

The script used to pre-train the model is here.

When you use the reference API, you must wrap the sentence with '[BOS]' and '[EOS]' as in the example below.

You can also test mask-filling performance using the '[MASK]' token like this.

Benchmark
---------

* The performance was measured using the notebooks here with Colab.

Used Datasets
-------------

### 모두의 말뭉치

* 일상 대화 말뭉치 2020
* 구어 말뭉치
* 문어 말뭉치
* 신문 말뭉치

### AIhub

* 개방데이터 전문분야말뭉치
* 개방데이터 한국어대화요약
* 개방데이터 감성 대화 말뭉치
* 개방데이터 한국어 음성
* 개방데이터 한국어 SNS

### 세종 말뭉치
[ "### 모두의 말뭉치\n\n\n* 일상 대화 말뭉치 2020\n* 구어 말뭉치\n* 문어 말뭉치\n* 신문 말뭉치", "### AIhub\n\n\n* 개방데이터 전문분야말뭉치\n* 개방데이터 한국어대화요약\n* 개방데이터 감성 대화 말뭉치\n* 개방데이터 한국어 음성\n* 개방데이터 한국어 SNS", "### 세종 말뭉치" ]
[ "TAGS\n#transformers #pytorch #tf #bart #text2text-generation #ko #autotrain_compatible #endpoints_compatible #region-us \n", "### 모두의 말뭉치\n\n\n* 일상 대화 말뭉치 2020\n* 구어 말뭉치\n* 문어 말뭉치\n* 신문 말뭉치", "### AIhub\n\n\n* 개방데이터 전문분야말뭉치\n* 개방데이터 한국어대화요약\n* 개방데이터 감성 대화 말뭉치\n* 개방데이터 한국어 음성\n* 개방데이터 한국어 SNS", "### 세종 말뭉치" ]
text2text-generation
transformers
# Pretrained BART in Korean This is pretrained BART model with multiple Korean Datasets. I used multiple datasets for generalizing the model for both colloquial and written texts. The training is supported by [TPU Research Cloud](https://sites.research.google/trc/) program. The script which is used to pre-train model is [here](https://github.com/cosmoquester/transformers-bart-pretrain). When you use the reference API, you must wrap the sentence with `[BOS]` and `[EOS]` like below example. ``` [BOS] 안녕하세요? 반가워요~~ [EOS] ``` You can also test mask filling performance using `[MASK]` token like this. ``` [BOS] [MASK] 먹었어? [EOS] ``` ## Benchmark <style> table { border-collapse: collapse; border-style: hidden; width: 100%; } td, th { border: 1px solid #4d5562; padding: 8px; } </style> <table> <tr> <th>Dataset</th> <td>KLUE NLI dev</th> <td>NSMC test</td> <td>QuestionPair test</td> <td colspan="2">KLUE TC dev</td> <td colspan="3">KLUE STS dev</td> <td colspan="3">KorSTS dev</td> <td colspan="2">HateSpeech dev</td> </tr> <tr> <th>Metric</th> <!-- KLUE NLI --> <td>Acc</th> <!-- NSMC --> <td>Acc</td> <!-- QuestionPair --> <td>Acc</td> <!-- KLUE TC --> <td>Acc</td> <td>F1</td> <!-- KLUE STS --> <td>F1</td> <td>Pearson</td> <td>Spearman</td> <!-- KorSTS --> <td>F1</td> <td>Pearson</td> <td>Spearman</td> <!-- HateSpeech --> <td>Bias Acc</td> <td>Hate Acc</td> </tr> <tr> <th>Score</th> <!-- KLUE NLI --> <td>0.639</th> <!-- NSMC --> <td>0.8721</td> <!-- QuestionPair --> <td>0.905</td> <!-- KLUE TC --> <td>0.8551</td> <td>0.8515</td> <!-- KLUE STS --> <td>0.7406</td> <td>0.7593</td> <td>0.7551</td> <!-- KorSTS --> <td>0.7897</td> <td>0.7269</td> <td>0.7037</td> <!-- HateSpeech --> <td>0.8068</td> <td>0.5966</td> </tr> </table> - The performance was measured using [the notebooks here](https://github.com/cosmoquester/transformers-bart-finetune) with colab. ## Used Datasets ### [모두의 말뭉치](https://corpus.korean.go.kr/) - 일상 대화 말뭉치 2020 - 구어 말뭉치 - 문어 말뭉치 - 신문 말뭉치 ### AIhub - [개방데이터 전문분야말뭉치](https://aihub.or.kr/aidata/30717) - [개방데이터 한국어대화요약](https://aihub.or.kr/aidata/30714) - [개방데이터 감성 대화 말뭉치](https://aihub.or.kr/aidata/7978) - [개방데이터 한국어 음성](https://aihub.or.kr/aidata/105) - [개방데이터 한국어 SNS](https://aihub.or.kr/aidata/30718) ### [세종 말뭉치](https://ithub.korean.go.kr/)
{"language": "ko"}
cosmoquester/bart-ko-small
null
[ "transformers", "pytorch", "tf", "bart", "text2text-generation", "ko", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ko" ]
TAGS #transformers #pytorch #tf #bart #text2text-generation #ko #autotrain_compatible #endpoints_compatible #region-us
Pretrained BART in Korean
=========================

This is a pretrained BART model trained on multiple Korean datasets. I used multiple datasets so that the model generalizes to both colloquial and written texts. The training is supported by the TPU Research Cloud program.

The script used to pre-train the model is here.

When you use the reference API, you must wrap the sentence with '[BOS]' and '[EOS]' as in the example below.

You can also test mask-filling performance using the '[MASK]' token like this.

Benchmark
---------

* The performance was measured using the notebooks here with Colab.

Used Datasets
-------------

### 모두의 말뭉치

* 일상 대화 말뭉치 2020
* 구어 말뭉치
* 문어 말뭉치
* 신문 말뭉치

### AIhub

* 개방데이터 전문분야말뭉치
* 개방데이터 한국어대화요약
* 개방데이터 감성 대화 말뭉치
* 개방데이터 한국어 음성
* 개방데이터 한국어 SNS

### 세종 말뭉치
[ "### 모두의 말뭉치\n\n\n* 일상 대화 말뭉치 2020\n* 구어 말뭉치\n* 문어 말뭉치\n* 신문 말뭉치", "### AIhub\n\n\n* 개방데이터 전문분야말뭉치\n* 개방데이터 한국어대화요약\n* 개방데이터 감성 대화 말뭉치\n* 개방데이터 한국어 음성\n* 개방데이터 한국어 SNS", "### 세종 말뭉치" ]
[ "TAGS\n#transformers #pytorch #tf #bart #text2text-generation #ko #autotrain_compatible #endpoints_compatible #region-us \n", "### 모두의 말뭉치\n\n\n* 일상 대화 말뭉치 2020\n* 구어 말뭉치\n* 문어 말뭉치\n* 신문 말뭉치", "### AIhub\n\n\n* 개방데이터 전문분야말뭉치\n* 개방데이터 한국어대화요약\n* 개방데이터 감성 대화 말뭉치\n* 개방데이터 한국어 음성\n* 개방데이터 한국어 SNS", "### 세종 말뭉치" ]
automatic-speech-recognition
transformers
# Wav2Vec2-Large-XLSR-53-eo Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on esperanto using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "eo", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("cpierse/wav2vec2-large-xlsr-53-esperanto") model = Wav2Vec2ForCTC.from_pretrained("cpierse/wav2vec2-large-xlsr-53-esperanto") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Esperanto test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re import jiwer def chunked_wer(targets, predictions, chunk_size=None): if chunk_size is None: return jiwer.wer(targets, predictions) start = 0 end = chunk_size H, S, D, I = 0, 0, 0, 0 while start < len(targets): chunk_metrics = jiwer.compute_measures(targets[start:end], predictions[start:end]) H = H + chunk_metrics["hits"] S = S + chunk_metrics["substitutions"] D = D + chunk_metrics["deletions"] I = I + chunk_metrics["insertions"] start += chunk_size end += chunk_size return float(S + D + I) / float(H + S + D) test_dataset = load_dataset("common_voice", "eo", split="test") #TODO: replace {lang_id} in your language code here. Make sure the code is one of the *ISO codes* of [this](https://huggingface.co/languages) site. wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("cpierse/wav2vec2-large-xlsr-53-esperanto") model = Wav2Vec2ForCTC.from_pretrained("cpierse/wav2vec2-large-xlsr-53-esperanto") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\„\«\(\»\)\’\']' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. 
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits

    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)

print("WER: {:.2f}".format(100 * chunked_wer(predictions=result["pred_strings"], targets=result["sentence"], chunk_size=2000)))
```

**Test Result**: 12.31 %

## Training

The Common Voice `train`, `validation` datasets were used for training.
{"language": "eo", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "XLSR Wav2Vec2 Esperanto by Charles Pierse", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice eo", "type": "common_voice", "args": "eo"}, "metrics": [{"type": "wer", "value": 12.31, "name": "Test WER"}]}]}]}
cpierse/wav2vec2-large-xlsr-53-esperanto
null
[ "transformers", "pytorch", "jax", "safetensors", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "eo", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "eo" ]
TAGS #transformers #pytorch #jax #safetensors #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #eo #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
# Wav2Vec2-Large-XLSR-53-eo Fine-tuned facebook/wav2vec2-large-xlsr-53 on esperanto using the Common Voice dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ## Evaluation The model can be evaluated as follows on the Esperanto test data of Common Voice. Test Result: 12.31 % ## Training The Common Voice 'train', 'validation' datasets were used for training.
[ "# Wav2Vec2-Large-XLSR-53-eo \n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on esperanto using the Common Voice dataset. \n\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the Esperanto test data of Common Voice. \n\n\n\n\nTest Result: 12.31 %", "## Training\n\nThe Common Voice 'train', 'validation' datasets were used for training." ]
[ "TAGS\n#transformers #pytorch #jax #safetensors #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #eo #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n", "# Wav2Vec2-Large-XLSR-53-eo \n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on esperanto using the Common Voice dataset. \n\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the Esperanto test data of Common Voice. \n\n\n\n\nTest Result: 12.31 %", "## Training\n\nThe Common Voice 'train', 'validation' datasets were used for training." ]
automatic-speech-recognition
transformers
# Wav2Vec2-Large-XLSR-53-Irish Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Irish using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "ga-IE", split="test[:2%]") #TODO: replace {lang_id} in your language code here. Make sure the code is one of the *ISO codes* of [this](https://huggingface.co/languages) site. processor = Wav2Vec2Processor.from_pretrained("cpierse/wav2vec2-large-xlsr-53-irish") model = Wav2Vec2ForCTC.from_pretrained("cpierse/wav2vec2-large-xlsr-53-irish") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Irish test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "ga-IE", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("cpierse/wav2vec2-large-xlsr-53-irish") model = Wav2Vec2ForCTC.from_pretrained("cpierse/wav2vec2-large-xlsr-53-irish") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\„\«\(\»\)\’\']' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. # We need to read the aduio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 43.06 %
{"language": "ga-IE", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "cpierse/wav2vec2-large-xlsr-53-irish", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice ga-IE", "type": "common_voice", "args": "ga-IE"}, "metrics": [{"type": "wer", "value": 43.06, "name": "Test WER"}]}]}]}
cpierse/wav2vec2-large-xlsr-53-irish
null
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ga-IE" ]
TAGS #transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
# Wav2Vec2-Large-XLSR-53-Irish Fine-tuned facebook/wav2vec2-large-xlsr-53 on Irish using the Common Voice dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ## Evaluation The model can be evaluated as follows on the Irish test data of Common Voice. Test Result: 43.06 %
[ "# Wav2Vec2-Large-XLSR-53-Irish \n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Irish using the Common Voice dataset. \n\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the Irish test data of Common Voice. \n\n\n\nTest Result: 43.06 %" ]
[ "TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "# Wav2Vec2-Large-XLSR-53-Irish \n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Irish using the Common Voice dataset. \n\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the Irish test data of Common Voice. \n\n\n\nTest Result: 43.06 %" ]
token-classification
transformers
# Named Entity Recognition based on FERNET-CC_sk This model is a fine-tuned version of [fav-kky/FERNET-CC_sk](https://huggingface.co/fav-kky/FERNET-CC_sk) on the Slovak wikiann dataset. It achieves the following results on the evaluation set: - Loss: 0.1763 - Precision: 0.9360 - Recall: 0.9472 - F1: 0.9416 - Accuracy: 0.9789 ## Intended uses & limitation Supported classes: LOCATION, PERSON, ORGANIZATION ``` from transformers import pipeline ner_pipeline = pipeline(task='ner', model='crabz/slovakbert-ner') input_sentence = "Minister financií a líder mandátovo najsilnejšieho hnutia OĽaNO Igor Matovič upozorňuje, že následky tretej vlny budú na Slovensku veľmi veľké." classifications = ner_pipeline(input_sentence) ``` ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.1259 | 1.0 | 834 | 0.1095 | 0.8963 | 0.9182 | 0.9071 | 0.9697 | | 0.071 | 2.0 | 1668 | 0.0974 | 0.9270 | 0.9357 | 0.9313 | 0.9762 | | 0.0323 | 3.0 | 2502 | 0.1259 | 0.9257 | 0.9330 | 0.9293 | 0.9745 | | 0.0175 | 4.0 | 3336 | 0.1347 | 0.9241 | 0.9360 | 0.9300 | 0.9756 | | 0.0156 | 5.0 | 4170 | 0.1407 | 0.9337 | 0.9404 | 0.9370 | 0.9780 | | 0.0062 | 6.0 | 5004 | 0.1522 | 0.9267 | 0.9410 | 0.9338 | 0.9774 | | 0.0055 | 7.0 | 5838 | 0.1559 | 0.9322 | 0.9429 | 0.9375 | 0.9780 | | 0.0024 | 8.0 | 6672 | 0.1733 | 0.9321 | 0.9438 | 0.9379 | 0.9779 | | 0.0009 | 9.0 | 7506 | 0.1765 | 0.9347 | 0.9468 | 0.9407 | 0.9784 | | 0.0002 | 10.0 | 8340 | 0.1763 | 0.9360 | 0.9472 | 0.9416 | 0.9789 | ### Framework versions - Transformers 4.14.0.dev0 - Pytorch 1.10.0 - Datasets 1.16.1 - Tokenizers 0.10.3
{"language": ["sk"], "license": "cc-by-nc-sa-4.0", "tags": ["generated_from_trainer"], "datasets": ["wikiann"], "metrics": ["precision", "recall", "f1", "accuracy"], "inference": false, "model-index": [{"name": "fernet-sk-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "wikiann sk", "type": "wikiann", "args": "sk"}, "metrics": [{"type": "precision", "value": 0.9359821760118826, "name": "Precision"}, {"type": "recall", "value": 0.9472378804960541, "name": "Recall"}, {"type": "f1", "value": 0.9415763914830033, "name": "F1"}, {"type": "accuracy", "value": 0.9789063466534702, "name": "Accuracy"}]}]}]}
crabz/FERNET-CC_sk-ner
null
[ "transformers", "pytorch", "bert", "token-classification", "generated_from_trainer", "sk", "dataset:wikiann", "license:cc-by-nc-sa-4.0", "model-index", "autotrain_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "sk" ]
TAGS #transformers #pytorch #bert #token-classification #generated_from_trainer #sk #dataset-wikiann #license-cc-by-nc-sa-4.0 #model-index #autotrain_compatible #region-us
Named Entity Recognition based on FERNET-CC\_sk =============================================== This model is a fine-tuned version of fav-kky/FERNET-CC\_sk on the Slovak wikiann dataset. It achieves the following results on the evaluation set: * Loss: 0.1763 * Precision: 0.9360 * Recall: 0.9472 * F1: 0.9416 * Accuracy: 0.9789 Intended uses & limitation -------------------------- Supported classes: LOCATION, PERSON, ORGANIZATION Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 24 * eval\_batch\_size: 24 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 10.0 ### Training results ### Framework versions * Transformers 4.14.0.dev0 * Pytorch 1.10.0 * Datasets 1.16.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10.0", "### Training results", "### Framework versions\n\n\n* Transformers 4.14.0.dev0\n* Pytorch 1.10.0\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #bert #token-classification #generated_from_trainer #sk #dataset-wikiann #license-cc-by-nc-sa-4.0 #model-index #autotrain_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10.0", "### Training results", "### Framework versions\n\n\n* Transformers 4.14.0.dev0\n* Pytorch 1.10.0\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
token-classification
transformers
# Named Entity Recognition based on bertoslav-limited This model is a fine-tuned version of [crabz/bertoslav-limited](https://huggingface.co/crabz/bertoslav-limited) on the Slovak wikiann dataset. It achieves the following results on the evaluation set: - Loss: 0.2119 - Precision: 0.8986 - Recall: 0.9174 - F1: 0.9079 - Accuracy: 0.9700 ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2953 | 1.0 | 834 | 0.1516 | 0.8413 | 0.8647 | 0.8529 | 0.9549 | | 0.0975 | 2.0 | 1668 | 0.1304 | 0.8787 | 0.9056 | 0.8920 | 0.9658 | | 0.0487 | 3.0 | 2502 | 0.1405 | 0.8916 | 0.8958 | 0.8937 | 0.9660 | | 0.025 | 4.0 | 3336 | 0.1658 | 0.8850 | 0.9116 | 0.8981 | 0.9669 | | 0.0161 | 5.0 | 4170 | 0.1739 | 0.8974 | 0.9127 | 0.9050 | 0.9693 | | 0.0074 | 6.0 | 5004 | 0.1888 | 0.8900 | 0.9144 | 0.9020 | 0.9687 | | 0.0051 | 7.0 | 5838 | 0.1996 | 0.8946 | 0.9145 | 0.9044 | 0.9693 | | 0.0039 | 8.0 | 6672 | 0.2052 | 0.8993 | 0.9158 | 0.9075 | 0.9697 | | 0.0024 | 9.0 | 7506 | 0.2112 | 0.8946 | 0.9171 | 0.9057 | 0.9696 | | 0.0018 | 10.0 | 8340 | 0.2119 | 0.8986 | 0.9174 | 0.9079 | 0.9700 | ### Framework versions - Transformers 4.14.0.dev0 - Pytorch 1.10.0 - Datasets 1.16.1 - Tokenizers 0.10.3
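The card documents training but not inference; below is a minimal usage sketch in the style of the neighbouring Slovak NER cards — the example sentence is invented and the exact tag names come from the model's config:

```python
from transformers import pipeline

# NER pipeline over the fine-tuned Slovak model; aggregation merges
# sub-tokens into whole entity spans.
ner_pipeline = pipeline(
    task="ner",
    model="crabz/bertoslav-limited-ner",
    aggregation_strategy="simple",
)
print(ner_pipeline("Prezidentka Zuzana Čaputová dnes navštívila Košice."))
```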
{"language": ["sk"], "tags": ["generated_from_trainer"], "datasets": ["wikiann"], "metrics": ["precision", "recall", "f1", "accuracy"], "inference": false, "model-index": [{"name": "bertoslav-limited-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "wikiann sk", "type": "wikiann", "args": "sk"}, "metrics": [{"type": "precision", "value": 0.8985571260306242, "name": "Precision"}, {"type": "recall", "value": 0.9173994738819993, "name": "Recall"}, {"type": "f1", "value": 0.9078805459481573, "name": "F1"}, {"type": "accuracy", "value": 0.9700235061239639, "name": "Accuracy"}]}]}]}
crabz/bertoslav-limited-ner
null
[ "transformers", "pytorch", "distilbert", "token-classification", "generated_from_trainer", "sk", "dataset:wikiann", "model-index", "autotrain_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "sk" ]
TAGS #transformers #pytorch #distilbert #token-classification #generated_from_trainer #sk #dataset-wikiann #model-index #autotrain_compatible #region-us
Named Entity Recognition based on bertoslav-limited =================================================== This model is a fine-tuned version of crabz/bertoslav-limited on the Slovak wikiann dataset. It achieves the following results on the evaluation set: * Loss: 0.2119 * Precision: 0.8986 * Recall: 0.9174 * F1: 0.9079 * Accuracy: 0.9700 Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 24 * eval\_batch\_size: 24 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 10.0 ### Training results ### Framework versions * Transformers 4.14.0.dev0 * Pytorch 1.10.0 * Datasets 1.16.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10.0", "### Training results", "### Framework versions\n\n\n* Transformers 4.14.0.dev0\n* Pytorch 1.10.0\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #distilbert #token-classification #generated_from_trainer #sk #dataset-wikiann #model-index #autotrain_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10.0", "### Training results", "### Framework versions\n\n\n* Transformers 4.14.0.dev0\n* Pytorch 1.10.0\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
token-classification
transformers
# Named Entity Recognition based on SlovakBERT This model is a fine-tuned version of [gerulata/slovakbert](https://huggingface.co/gerulata/slovakbert) on the Slovak wikiann dataset. It achieves the following results on the evaluation set: - Loss: 0.1600 - Precision: 0.9327 - Recall: 0.9470 - F1: 0.9398 - Accuracy: 0.9785 ## Intended uses & limitations Supported classes: LOCATION, PERSON, ORGANIZATION ``` from transformers import pipeline ner_pipeline = pipeline(task='ner', model='crabz/slovakbert-ner') input_sentence = "Minister financií a líder mandátovo najsilnejšieho hnutia OĽaNO Igor Matovič upozorňuje, že následky tretej vlny budú na Slovensku veľmi veľké." classifications = ner_pipeline(input_sentence) ``` with `displaCy`: ``` import spacy from spacy import displacy ner_map = {0: '0', 1: 'B-OSOBA', 2: 'I-OSOBA', 3: 'B-ORGANIZÁCIA', 4: 'I-ORGANIZÁCIA', 5: 'B-LOKALITA', 6: 'I-LOKALITA'} entities = [] for i in range(len(classifications)): if classifications[i]['entity'] != 0: if ner_map[classifications[i]['entity']][0] == 'B': j = i + 1 while j < len(classifications) and ner_map[classifications[j]['entity']][0] == 'I': j += 1 entities.append((ner_map[classifications[i]['entity']].split('-')[1], classifications[i]['start'], classifications[j - 1]['end'])) nlp = spacy.blank("en") # it should work with any language doc = nlp(input_sentence) ents = [] for ee in entities: ents.append(doc.char_span(ee[1], ee[2], ee[0])) doc.ents = ents options = {"ents": ["OSOBA", "ORGANIZÁCIA", "LOKALITA"], "colors": {"OSOBA": "lightblue", "ORGANIZÁCIA": "lightcoral", "LOKALITA": "lightgreen"}} displacy_html = displacy.render(doc, style="ent", options=options) ``` <div class="entities" style="line-height: 2.5; direction: ltr">Minister financií a líder mandátovo najsilnejšieho hnutia <mark class="entity" style="background: lightcoral; padding: 0.45em 0.6em; margin: 0 0.25em; line-height: 1; border-radius: 0.35em;"> OĽaNO <span style="font-size: 0.8em; font-weight: bold; line-height: 1; border-radius: 0.35em; vertical-align: middle; margin-left: 0.5rem">ORGANIZÁCIA</span> </mark> <mark class="entity" style="background: lightblue; padding: 0.45em 0.6em; margin: 0 0.25em; line-height: 1; border-radius: 0.35em;"> Igor Matovič <span style="font-size: 0.8em; font-weight: bold; line-height: 1; border-radius: 0.35em; vertical-align: middle; margin-left: 0.5rem">OSOBA</span> </mark> upozorňuje, že následky tretej vlny budú na <mark class="entity" style="background: lightgreen; padding: 0.45em 0.6em; margin: 0 0.25em; line-height: 1; border-radius: 0.35em;"> Slovensku <span style="font-size: 0.8em; font-weight: bold; line-height: 1; border-radius: 0.35em; vertical-align: middle; margin-left: 0.5rem">LOKALITA</span> </mark> veľmi veľké.</div> ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2342 | 1.0 | 625 | 0.1233 | 0.8891 | 0.9076 | 0.8982 | 0.9667 | | 0.1114 | 2.0 | 1250 | 0.1079 | 0.9118 | 0.9269 | 0.9193 | 0.9725 | | 0.0817 | 3.0 | 1875 | 0.1093 | 0.9173 | 0.9315 | 0.9243 | 0.9747 | | 0.0438 | 4.0 | 2500 | 0.1076 | 0.9188 | 0.9353 | 0.9270 | 0.9743 | | 0.028 | 5.0 | 3125 | 
0.1230 | 0.9143 | 0.9387 | 0.9264 | 0.9744 | | 0.0256 | 6.0 | 3750 | 0.1204 | 0.9246 | 0.9423 | 0.9334 | 0.9765 | | 0.018 | 7.0 | 4375 | 0.1332 | 0.9292 | 0.9416 | 0.9353 | 0.9770 | | 0.0107 | 8.0 | 5000 | 0.1339 | 0.9280 | 0.9427 | 0.9353 | 0.9769 | | 0.0079 | 9.0 | 5625 | 0.1368 | 0.9326 | 0.9442 | 0.9383 | 0.9785 | | 0.0065 | 10.0 | 6250 | 0.1490 | 0.9284 | 0.9445 | 0.9364 | 0.9772 | | 0.0061 | 11.0 | 6875 | 0.1566 | 0.9328 | 0.9433 | 0.9380 | 0.9778 | | 0.0031 | 12.0 | 7500 | 0.1555 | 0.9339 | 0.9473 | 0.9406 | 0.9787 | | 0.0024 | 13.0 | 8125 | 0.1548 | 0.9349 | 0.9462 | 0.9405 | 0.9787 | | 0.0015 | 14.0 | 8750 | 0.1562 | 0.9330 | 0.9469 | 0.9399 | 0.9788 | | 0.0013 | 15.0 | 9375 | 0.1600 | 0.9327 | 0.9470 | 0.9398 | 0.9785 | ### Framework versions - Transformers 4.13.0.dev0 - Pytorch 1.10.0+cu113 - Datasets 1.15.1 - Tokenizers 0.10.3
{"language": ["sk"], "license": "mit", "tags": ["generated_from_trainer"], "datasets": ["wikiann"], "metrics": ["precision", "recall", "f1", "accuracy"], "inference": false, "widget": [{"text": "Zuzana \u010caputov\u00e1 sa narodila 21. j\u00fana 1973 v Bratislave.", "example_title": "Named Entity Recognition"}], "base_model": "gerulata/slovakbert", "model-index": [{"name": "slovakbert-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "wikiann", "type": "wikiann", "args": "sk"}, "metrics": [{"type": "precision", "value": 0.9327115256495669, "name": "Precision"}, {"type": "recall", "value": 0.9470124013528749, "name": "Recall"}, {"type": "f1", "value": 0.9398075632132469, "name": "F1"}, {"type": "accuracy", "value": 0.9785228256835333, "name": "Accuracy"}]}]}]}
crabz/slovakbert-ner
null
[ "transformers", "pytorch", "roberta", "token-classification", "generated_from_trainer", "sk", "dataset:wikiann", "base_model:gerulata/slovakbert", "license:mit", "model-index", "autotrain_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "sk" ]
TAGS #transformers #pytorch #roberta #token-classification #generated_from_trainer #sk #dataset-wikiann #base_model-gerulata/slovakbert #license-mit #model-index #autotrain_compatible #has_space #region-us
Named Entity Recognition based on SlovakBERT ============================================ This model is a fine-tuned version of gerulata/slovakbert on the Slovak wikiann dataset. It achieves the following results on the evaluation set: * Loss: 0.1600 * Precision: 0.9327 * Recall: 0.9470 * F1: 0.9398 * Accuracy: 0.9785 Intended uses & limitations --------------------------- Supported classes: LOCATION, PERSON, ORGANIZATION with 'displaCy': Minister financií a líder mandátovo najsilnejšieho hnutia OĽaNO ORGANIZÁCIA Igor Matovič OSOBA upozorňuje, že následky tretej vlny budú na Slovensku LOKALITA veľmi veľké. Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 32 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 15.0 ### Training results ### Framework versions * Transformers 4.13.0.dev0 * Pytorch 1.10.0+cu113 * Datasets 1.15.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15.0", "### Training results", "### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0+cu113\n* Datasets 1.15.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #roberta #token-classification #generated_from_trainer #sk #dataset-wikiann #base_model-gerulata/slovakbert #license-mit #model-index #autotrain_compatible #has_space #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15.0", "### Training results", "### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0+cu113\n* Datasets 1.15.1\n* Tokenizers 0.10.3" ]
automatic-speech-recognition
transformers
# Wav2Vec2-Large-XLSR-53-Frisian Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Frisian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "fy-NL", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("crang/wav2vec2-large-xlsr-53-frisian") model = Wav2Vec2ForCTC.from_pretrained("crang/wav2vec2-large-xlsr-53-frisian") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Frisian test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "fy-NL", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("crang/wav2vec2-large-xlsr-53-frisian") model = Wav2Vec2ForCTC.from_pretrained("crang/wav2vec2-large-xlsr-53-frisian") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\u2013\u2014\;\:\"\\%\\\]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. # We need to read the aduio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 19.11 % ## Training The Common Voice `train` and `validation` datasets were used for training.
{"language": "fy-NL", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "Frisian XLSR Wav2Vec2 Large 53", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice fy-NL", "type": "common_voice", "args": "fy-NL"}, "metrics": [{"type": "wer", "value": 19.11, "name": "Test WER"}]}]}]}
crang/wav2vec2-large-xlsr-53-frisian
null
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "fy-NL" ]
TAGS #transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
# Wav2Vec2-Large-XLSR-53-Frisian Fine-tuned facebook/wav2vec2-large-xlsr-53 on Frisian using the Common Voice dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ## Evaluation The model can be evaluated as follows on the Frisian test data of Common Voice. Test Result: 19.11 % ## Training The Common Voice 'train' and 'validation' datasets were used for training.
[ "# Wav2Vec2-Large-XLSR-53-Frisian\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Frisian using the Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the Frisian test data of Common Voice.\n\n\n\n\nTest Result: 19.11 %", "## Training\n\nThe Common Voice 'train' and 'validation' datasets were used for training." ]
[ "TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "# Wav2Vec2-Large-XLSR-53-Frisian\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Frisian using the Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the Frisian test data of Common Voice.\n\n\n\n\nTest Result: 19.11 %", "## Training\n\nThe Common Voice 'train' and 'validation' datasets were used for training." ]
automatic-speech-recognition
transformers
# Wav2Vec2-Large-XLSR-53-Tatar Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Tatar using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "tt", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("crang/wav2vec2-large-xlsr-53-tatar") model = Wav2Vec2ForCTC.from_pretrained("crang/wav2vec2-large-xlsr-53-tatar") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Tatar test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "tt", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("crang/wav2vec2-large-xlsr-53-tatar") model = Wav2Vec2ForCTC.from_pretrained("crang/wav2vec2-large-xlsr-53-tatar") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\u2013\u2014\;\:\"\\%\\\]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. # We need to read the aduio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 30.93 % ## Training The Common Voice `train` and `validation` datasets were used for training.
{"language": "tt", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "Tatar XLSR Wav2Vec2 Large 53", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice tt", "type": "common_voice", "args": "tt"}, "metrics": [{"type": "wer", "value": 30.93, "name": "Test WER"}]}]}]}
crang/wav2vec2-large-xlsr-53-tatar
null
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "tt", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "tt" ]
TAGS #transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #tt #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
# Wav2Vec2-Large-XLSR-53-Tatar Fine-tuned facebook/wav2vec2-large-xlsr-53 on Tatar using the Common Voice dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ## Evaluation The model can be evaluated as follows on the Tatar test data of Common Voice. Test Result: 30.93 % ## Training The Common Voice 'train' and 'validation' datasets were used for training.
[ "# Wav2Vec2-Large-XLSR-53-Tatar\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Tatar using the Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the Tatar test data of Common Voice.\n\n\n\n\nTest Result: 30.93 %", "## Training\n\nThe Common Voice 'train' and 'validation' datasets were used for training." ]
[ "TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #tt #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "# Wav2Vec2-Large-XLSR-53-Tatar\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Tatar using the Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the Tatar test data of Common Voice.\n\n\n\n\nTest Result: 30.93 %", "## Training\n\nThe Common Voice 'train' and 'validation' datasets were used for training." ]
null
transformers
This is a multilingual NER system trained using a Frustratingly Easy Domain Adaptation architecture. It is based on LaBSE and supports different tagsets, all using the IOBES format:

1. Wikiann (LOC, PER, ORG)
2. SlavNER 19/21 (EVT, LOC, ORG, PER, PRO)
7. Turku (DATE, EVT, LOC, ORG, PER, PRO, TIME)

PER: person, LOC: location, ORG: organization, EVT: event, PRO: product, MISC: Miscellaneous, MEDIA: media, ART: Artifact, TIME: time, DATE: date, GEOPOLIT: Geopolitical

You can select the tagset to use in the output by configuring the model. This model handles uppercase words differently.

More information about the model can be found in the paper (https://aclanthology.org/2021.bsnlp-1.12.pdf) and GitHub repository (https://github.com/EMBEDDIA/NER_FEDA).
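None of the NER_FEDA cards include a loading snippet, and the FEDA architecture adds its own multi-tagset classification heads on top of the encoder, so the snippet below is only a heavily hedged sketch: it assumes the repository's config loads as a plain BERT-style encoder with `AutoModel` (pulling the LaBSE-based weights), while the tagset heads and the tagset selection described above come from the project's own NER_FEDA code; the Bulgarian example sentence is made up.

```python
from transformers import AutoTokenizer, AutoModel

# Hedged sketch: plain transformers only exposes the shared encoder stored in the repo;
# the FEDA-specific tagset heads and their configuration live in the NER_FEDA codebase.
model_id = "creat89/NER_FEDA_Bg"
tokenizer = AutoTokenizer.from_pretrained(model_id)
encoder = AutoModel.from_pretrained(model_id)

batch = tokenizer("Борис Джонсън пристигна в София.", return_tensors="pt")
hidden = encoder(**batch).last_hidden_state  # token representations fed to the tagset heads
print(hidden.shape)
```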
{"language": ["multilingual", "bg", "mk"], "license": "mit", "tags": ["labse", "ner"]}
creat89/NER_FEDA_Bg
null
[ "transformers", "pytorch", "bert", "labse", "ner", "multilingual", "bg", "mk", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "multilingual", "bg", "mk" ]
TAGS #transformers #pytorch #bert #labse #ner #multilingual #bg #mk #license-mit #endpoints_compatible #region-us
This is a multilingual NER system trained using a Frustratingly Easy Domain Adaptation architecture. It is based on LaBSE and supports different tagsets, all using the IOBES format:

1. Wikiann (LOC, PER, ORG)
2. SlavNER 19/21 (EVT, LOC, ORG, PER, PRO)
7. Turku (DATE, EVT, LOC, ORG, PER, PRO, TIME)

PER: person, LOC: location, ORG: organization, EVT: event, PRO: product, MISC: Miscellaneous, MEDIA: media, ART: Artifact, TIME: time, DATE: date, GEOPOLIT: Geopolitical

You can select the tagset to use in the output by configuring the model. This model handles uppercase words differently.

More information about the model can be found in the paper (URL and GitHub repository (URL
[]
[ "TAGS\n#transformers #pytorch #bert #labse #ner #multilingual #bg #mk #license-mit #endpoints_compatible #region-us \n" ]
null
transformers
This is a multilingual NER system trained using a Frustratingly Easy Domain Adaptation architecture. It is based on LaBSE and supports different tagsets, all using the IOBES format:

1. Wikiann (LOC, PER, ORG)
2. SlavNER 19/21 (EVT, LOC, ORG, PER, PRO)
3. CNEC (LOC, ORG, MEDIA, ART, PER, TIME)
4. Turku (DATE, EVT, LOC, ORG, PER, PRO, TIME)

PER: person, LOC: location, ORG: organization, EVT: event, PRO: product, MISC: Miscellaneous, MEDIA: media, ART: Artifact, TIME: time, DATE: date

You can select the tagset to use in the output by configuring the model. This model handles uppercase words differently.

More information about the model can be found in the paper (https://aclanthology.org/2021.bsnlp-1.12.pdf) and GitHub repository (https://github.com/EMBEDDIA/NER_FEDA).
{"language": ["multilingual", "cs"], "license": "mit", "tags": ["labse", "ner"]}
creat89/NER_FEDA_Cs
null
[ "transformers", "pytorch", "bert", "labse", "ner", "multilingual", "cs", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "multilingual", "cs" ]
TAGS #transformers #pytorch #bert #labse #ner #multilingual #cs #license-mit #endpoints_compatible #region-us
This is a multilingual NER system trained using a Frustratingly Easy Domain Adaptation architecture. It is based on LaBSE and supports different tagsets, all using the IOBES format:

1. Wikiann (LOC, PER, ORG)
2. SlavNER 19/21 (EVT, LOC, ORG, PER, PRO)
3. CNEC (LOC, ORG, MEDIA, ART, PER, TIME)
4. Turku (DATE, EVT, LOC, ORG, PER, PRO, TIME)

PER: person, LOC: location, ORG: organization, EVT: event, PRO: product, MISC: Miscellaneous, MEDIA: media, ART: Artifact, TIME: time, DATE: date

You can select the tagset to use in the output by configuring the model. This model handles uppercase words differently.

More information about the model can be found in the paper (URL and GitHub repository (URL
[]
[ "TAGS\n#transformers #pytorch #bert #labse #ner #multilingual #cs #license-mit #endpoints_compatible #region-us \n" ]
null
transformers
This is a multilingual NER system trained using a Frustratingly Easy Domain Adaptation architecture. It is based on LaBSE and supports different tagsets all using IOBES formats: 1. Wikiann (LOC, PER, ORG) 2. SlavNER 19/21 (EVT, LOC, ORG, PER, PRO) 3. SlavNER 17 (LOC, MISC, ORG, PER) 4. CNE5 (GEOPOLIT, LOC, MEDIA, PER, ORG) 5. FactRuEval (LOC, ORG, PER) 6. NER-UK (LOC, MISC, ORG, PER) 7. Turku (DATE, EVT, LOC, ORG, PER, PRO, TIME) PER: person, LOC: location, ORG: organization, EVT: event, PRO: product, MISC: Miscellaneous, MEDIA: media, ART: Artifact, TIME: time, DATE: date, GEOPOLIT: Geopolitical, You can select the tagset to use in the output by configuring the model. More information about the model can be found in the paper (https://aclanthology.org/2021.bsnlp-1.12.pdf) and GitHub repository (https://github.com/EMBEDDIA/NER_FEDA).
{"language": ["multilingual", "ru", "bg", "mk", "uk", "fi"], "license": "mit", "tags": ["labse", "ner"]}
creat89/NER_FEDA_Cyrillic1
null
[ "transformers", "pytorch", "bert", "labse", "ner", "multilingual", "ru", "bg", "mk", "uk", "fi", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "multilingual", "ru", "bg", "mk", "uk", "fi" ]
TAGS #transformers #pytorch #bert #labse #ner #multilingual #ru #bg #mk #uk #fi #license-mit #endpoints_compatible #region-us
This is a multilingual NER system trained using a Frustratingly Easy Domain Adaptation architecture. It is based on LaBSE and supports different tagsets all using IOBES formats: 1. Wikiann (LOC, PER, ORG) 2. SlavNER 19/21 (EVT, LOC, ORG, PER, PRO) 3. SlavNER 17 (LOC, MISC, ORG, PER) 4. CNE5 (GEOPOLIT, LOC, MEDIA, PER, ORG) 5. FactRuEval (LOC, ORG, PER) 6. NER-UK (LOC, MISC, ORG, PER) 7. Turku (DATE, EVT, LOC, ORG, PER, PRO, TIME) PER: person, LOC: location, ORG: organization, EVT: event, PRO: product, MISC: Miscellaneous, MEDIA: media, ART: Artifact, TIME: time, DATE: date, GEOPOLIT: Geopolitical, You can select the tagset to use in the output by configuring the model. More information about the model can be found in the paper (URL and GitHub repository (URL
[]
[ "TAGS\n#transformers #pytorch #bert #labse #ner #multilingual #ru #bg #mk #uk #fi #license-mit #endpoints_compatible #region-us \n" ]
null
transformers
This is a multilingual NER system trained using a Frustratingly Easy Domain Adaptation architecture. It is based on LaBSE and supports different tagsets, all using the IOBES format:

1. Wikiann (LOC, PER, ORG)
2. SlavNER 19/21 (EVT, LOC, ORG, PER, PRO)
3. SlavNER 17 (LOC, MISC, ORG, PER)
4. CNE5 (GEOPOLIT, LOC, MEDIA, PER, ORG)
5. FactRuEval (LOC, ORG, PER)
6. NER-UK (LOC, MISC, ORG, PER)
7. Turku (DATE, EVT, LOC, ORG, PER, PRO, TIME)

PER: person, LOC: location, ORG: organization, EVT: event, PRO: product, MISC: Miscellaneous, MEDIA: media, ART: Artifact, TIME: time, DATE: date, GEOPOLIT: Geopolitical

You can select the tagset to use in the output by configuring the model. This model handles uppercase words differently.

More information about the model can be found in the paper (https://aclanthology.org/2021.bsnlp-1.12.pdf) and GitHub repository (https://github.com/EMBEDDIA/NER_FEDA).
{"language": ["multilingual", "ru", "bg", "mk", "uk", "fi"], "license": "mit", "tags": ["labse", "ner"]}
creat89/NER_FEDA_Cyrillic2
null
[ "transformers", "pytorch", "bert", "labse", "ner", "multilingual", "ru", "bg", "mk", "uk", "fi", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "multilingual", "ru", "bg", "mk", "uk", "fi" ]
TAGS #transformers #pytorch #bert #labse #ner #multilingual #ru #bg #mk #uk #fi #license-mit #endpoints_compatible #region-us
This is a multilingual NER system trained using a Frustratingly Easy Domain Adaptation architecture. It is based on LaBSE and supports different tagsets, all using the IOBES format:

1. Wikiann (LOC, PER, ORG)
2. SlavNER 19/21 (EVT, LOC, ORG, PER, PRO)
3. SlavNER 17 (LOC, MISC, ORG, PER)
4. CNE5 (GEOPOLIT, LOC, MEDIA, PER, ORG)
5. FactRuEval (LOC, ORG, PER)
6. NER-UK (LOC, MISC, ORG, PER)
7. Turku (DATE, EVT, LOC, ORG, PER, PRO, TIME)

PER: person, LOC: location, ORG: organization, EVT: event, PRO: product, MISC: Miscellaneous, MEDIA: media, ART: Artifact, TIME: time, DATE: date, GEOPOLIT: Geopolitical

You can select the tagset to use in the output by configuring the model. This model handles uppercase words differently.

More information about the model can be found in the paper (URL and GitHub repository (URL
[]
[ "TAGS\n#transformers #pytorch #bert #labse #ner #multilingual #ru #bg #mk #uk #fi #license-mit #endpoints_compatible #region-us \n" ]
null
transformers
This is a multilingual NER system trained using a Frustratingly Easy Domain Adaptation architecture. It is based on LaBSE and supports different tagsets all using IOBES formats: 1. Wikiann (LOC, PER, ORG) 2. SlavNER 19/21 (EVT, LOC, ORG, PER, PRO) 3. SlavNER 17 (LOC, MISC, ORG, PER) 4. SSJ500k (LOC, MISC, ORG, PER) 5. KPWr (EVT, LOC, ORG, PER, PRO) 6. CNEC (LOC, ORG, MEDIA, ART, PER, TIME) 7. Turku (DATE, EVT, LOC, ORG, PER, PRO, TIME) PER: person, LOC: location, ORG: organization, EVT: event, PRO: product, MISC: Miscellaneous, MEDIA: media, ART: Artifact, TIME: time, DATE: date You can select the tagset to use in the output by configuring the model. More information about the model can be found in the paper (https://aclanthology.org/2021.bsnlp-1.12.pdf) and GitHub repository (https://github.com/EMBEDDIA/NER_FEDA).
{"language": ["multilingual", "cs", "pl", "sl", "fi"], "license": "mit", "tags": ["labse", "ner"]}
creat89/NER_FEDA_Latin1
null
[ "transformers", "pytorch", "bert", "labse", "ner", "multilingual", "cs", "pl", "sl", "fi", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "multilingual", "cs", "pl", "sl", "fi" ]
TAGS #transformers #pytorch #bert #labse #ner #multilingual #cs #pl #sl #fi #license-mit #endpoints_compatible #region-us
This is a multilingual NER system trained using a Frustratingly Easy Domain Adaptation architecture. It is based on LaBSE and supports different tagsets all using IOBES formats: 1. Wikiann (LOC, PER, ORG) 2. SlavNER 19/21 (EVT, LOC, ORG, PER, PRO) 3. SlavNER 17 (LOC, MISC, ORG, PER) 4. SSJ500k (LOC, MISC, ORG, PER) 5. KPWr (EVT, LOC, ORG, PER, PRO) 6. CNEC (LOC, ORG, MEDIA, ART, PER, TIME) 7. Turku (DATE, EVT, LOC, ORG, PER, PRO, TIME) PER: person, LOC: location, ORG: organization, EVT: event, PRO: product, MISC: Miscellaneous, MEDIA: media, ART: Artifact, TIME: time, DATE: date You can select the tagset to use in the output by configuring the model. More information about the model can be found in the paper (URL and GitHub repository (URL
[]
[ "TAGS\n#transformers #pytorch #bert #labse #ner #multilingual #cs #pl #sl #fi #license-mit #endpoints_compatible #region-us \n" ]
null
transformers
This is a multilingual NER system trained using a Frustratingly Easy Domain Adaptation architecture. It is based on LaBSE and supports different tagsets, all using the IOBES format:

1. Wikiann (LOC, PER, ORG)
2. SlavNER 19/21 (EVT, LOC, ORG, PER, PRO)
3. SlavNER 17 (LOC, MISC, ORG, PER)
4. SSJ500k (LOC, MISC, ORG, PER)
5. KPWr (EVT, LOC, ORG, PER, PRO)
6. CNEC (LOC, ORG, MEDIA, ART, PER, TIME)
7. Turku (DATE, EVT, LOC, ORG, PER, PRO, TIME)

PER: person, LOC: location, ORG: organization, EVT: event, PRO: product, MISC: Miscellaneous, MEDIA: media, ART: Artifact, TIME: time, DATE: date

You can select the tagset to use in the output by configuring the model. This model handles uppercase words differently.

More information about the model can be found in the paper (https://aclanthology.org/2021.bsnlp-1.12.pdf) and GitHub repository (https://github.com/EMBEDDIA/NER_FEDA).
{"language": ["multilingual", "cs", "pl", "sl", "fi"], "license": "mit", "tags": ["labse", "ner"]}
creat89/NER_FEDA_Latin2
null
[ "transformers", "pytorch", "bert", "labse", "ner", "multilingual", "cs", "pl", "sl", "fi", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "multilingual", "cs", "pl", "sl", "fi" ]
TAGS #transformers #pytorch #bert #labse #ner #multilingual #cs #pl #sl #fi #license-mit #endpoints_compatible #region-us
This is a multilingual NER system trained using a Frustratingly Easy Domain Adaptation architecture. It is based on LaBSE and supports different tagsets, all using the IOBES format:

1. Wikiann (LOC, PER, ORG)
2. SlavNER 19/21 (EVT, LOC, ORG, PER, PRO)
3. SlavNER 17 (LOC, MISC, ORG, PER)
4. SSJ500k (LOC, MISC, ORG, PER)
5. KPWr (EVT, LOC, ORG, PER, PRO)
6. CNEC (LOC, ORG, MEDIA, ART, PER, TIME)
7. Turku (DATE, EVT, LOC, ORG, PER, PRO, TIME)

PER: person, LOC: location, ORG: organization, EVT: event, PRO: product, MISC: Miscellaneous, MEDIA: media, ART: Artifact, TIME: time, DATE: date

You can select the tagset to use in the output by configuring the model. This model handles uppercase words differently.

More information about the model can be found in the paper (URL and GitHub repository (URL
[]
[ "TAGS\n#transformers #pytorch #bert #labse #ner #multilingual #cs #pl #sl #fi #license-mit #endpoints_compatible #region-us \n" ]
null
transformers
This is a Polish NER system trained using a Frustratingly Easy Domain Adaptation architecture. It is based on Polish BERT and supports different tagsets, all using the IOBES format:

1. Wikiann (LOC, PER, ORG)
2. SlavNER 19/21 (EVT, LOC, ORG, PER, PRO)
5. KPWr (EVT, LOC, ORG, PER, PRO)
6. NKJP (DATE, GEOPOLIT, LOC, ORG, PER, TIME)

PER: person, LOC: location, ORG: organization, EVT: event, PRO: product, MISC: Miscellaneous, MEDIA: media, ART: Artifact, TIME: time, DATE: date, GEOPOLIT: Geopolitical

You can select the tagset to use in the output by configuring the model. This model handles uppercase words differently.

More information about the model can be found in the paper (https://aclanthology.org/2021.bsnlp-1.12.pdf) and GitHub repository (https://github.com/EMBEDDIA/NER_FEDA).
{"language": ["pl"], "license": "mit", "tags": ["polish_bert", "ner"]}
creat89/NER_FEDA_Pl
null
[ "transformers", "pytorch", "bert", "polish_bert", "ner", "pl", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "pl" ]
TAGS #transformers #pytorch #bert #polish_bert #ner #pl #license-mit #endpoints_compatible #region-us
This is a Polish NER system trained using a Frustratingly Easy Domain Adaptation architecture. It is based on Polish BERT and supports different tagsets, all using the IOBES format:

1. Wikiann (LOC, PER, ORG)
2. SlavNER 19/21 (EVT, LOC, ORG, PER, PRO)
5. KPWr (EVT, LOC, ORG, PER, PRO)
6. NKJP (DATE, GEOPOLIT, LOC, ORG, PER, TIME)

PER: person, LOC: location, ORG: organization, EVT: event, PRO: product, MISC: Miscellaneous, MEDIA: media, ART: Artifact, TIME: time, DATE: date, GEOPOLIT: Geopolitical

You can select the tagset to use in the output by configuring the model. This model handles uppercase words differently.

More information about the model can be found in the paper (URL and GitHub repository (URL
[]
[ "TAGS\n#transformers #pytorch #bert #polish_bert #ner #pl #license-mit #endpoints_compatible #region-us \n" ]
null
transformers
This is a Russian NER system trained using a Frustratingly Easy Domain Adaptation architecture. It is based on RuBERT and supports different tagsets, all using the IOBES format:

1. Wikiann (LOC, PER, ORG)
2. SlavNER 19/21 (EVT, LOC, ORG, PER, PRO)
4. CNE5 (GEOPOLIT, LOC, MEDIA, PER, ORG)
5. FactRuEval (LOC, ORG, PER)

PER: person, LOC: location, ORG: organization, EVT: event, PRO: product, MISC: Miscellaneous, MEDIA: media, ART: Artifact, TIME: time, DATE: date, GEOPOLIT: Geopolitical

You can select the tagset to use in the output by configuring the model. This model handles uppercase words differently.

More information about the model can be found in the paper (https://aclanthology.org/2021.bsnlp-1.12.pdf) and GitHub repository (https://github.com/EMBEDDIA/NER_FEDA).
{"language": ["ru"], "license": "mit", "tags": ["rubert", "ner"]}
creat89/NER_FEDA_Ru
null
[ "transformers", "pytorch", "bert", "rubert", "ner", "ru", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ru" ]
TAGS #transformers #pytorch #bert #rubert #ner #ru #license-mit #endpoints_compatible #region-us
This is a Russian NER system trained using a Frustratingly Easy Domain Adaptation architecture. It is based on RuBERT and supports different tagsets, all using the IOBES format:

1. Wikiann (LOC, PER, ORG)
2. SlavNER 19/21 (EVT, LOC, ORG, PER, PRO)
4. CNE5 (GEOPOLIT, LOC, MEDIA, PER, ORG)
5. FactRuEval (LOC, ORG, PER)

PER: person, LOC: location, ORG: organization, EVT: event, PRO: product, MISC: Miscellaneous, MEDIA: media, ART: Artifact, TIME: time, DATE: date, GEOPOLIT: Geopolitical

You can select the tagset to use in the output by configuring the model. This model handles uppercase words differently.

More information about the model can be found in the paper (URL and GitHub repository (URL
[]
[ "TAGS\n#transformers #pytorch #bert #rubert #ner #ru #license-mit #endpoints_compatible #region-us \n" ]
null
transformers
This is a multilingual NER system trained using a Frustratingly Easy Domain Adaptation architecture. It is based on CroSloEngual (https://huggingface.co/EMBEDDIA/crosloengual-bert) and supports different tagsets, all using the IOBES format:

1. Wikiann (LOC, PER, ORG)
2. SlavNER 19/21 (EVT, LOC, ORG, PER, PRO)
3. SSJ500k (LOC, MISC, ORG, PER)

PER: person, LOC: location, ORG: organization, EVT: event, PRO: product, MISC: Miscellaneous, MEDIA: media, ART: Artifact, TIME: time, DATE: date

You can select the tagset to use in the output by configuring the model. This model handles uppercase words differently.

More information about the model can be found in the paper (https://aclanthology.org/2021.bsnlp-1.12.pdf) and GitHub repository (https://github.com/EMBEDDIA/NER_FEDA).
{"language": ["hr", "sl", "en", "multilingual"], "license": "mit", "tags": ["CroSloEngual", "ner"]}
creat89/NER_FEDA_Sl
null
[ "transformers", "pytorch", "bert", "CroSloEngual", "ner", "hr", "sl", "en", "multilingual", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "hr", "sl", "en", "multilingual" ]
TAGS #transformers #pytorch #bert #CroSloEngual #ner #hr #sl #en #multilingual #license-mit #endpoints_compatible #region-us
This is a multilingual NER system trained using a Frustratingly Easy Domain Adaptation architecture. It is based on CroSloEngual (URL and supports different tagsets, all using the IOBES format:

1. Wikiann (LOC, PER, ORG)
2. SlavNER 19/21 (EVT, LOC, ORG, PER, PRO)
3. SSJ500k (LOC, MISC, ORG, PER)

PER: person, LOC: location, ORG: organization, EVT: event, PRO: product, MISC: Miscellaneous, MEDIA: media, ART: Artifact, TIME: time, DATE: date

You can select the tagset to use in the output by configuring the model. This model handles uppercase words differently.

More information about the model can be found in the paper (URL and GitHub repository (URL
[]
[ "TAGS\n#transformers #pytorch #bert #CroSloEngual #ner #hr #sl #en #multilingual #license-mit #endpoints_compatible #region-us \n" ]
null
transformers
This is a multilingual NER system trained using a Frustratingly Easy Domain Adaptation architecture. It is based on LaBSE and supports different tagsets, all using the IOBES format:

1. Wikiann (LOC, PER, ORG)
2. SlavNER 19/21 (EVT, LOC, ORG, PER, PRO)
3. NER-UK (LOC, MISC, ORG, PER)
4. Turku (DATE, EVT, LOC, ORG, PER, PRO, TIME)

PER: person, LOC: location, ORG: organization, EVT: event, PRO: product, MISC: Miscellaneous, MEDIA: media, ART: Artifact, TIME: time, DATE: date, GEOPOLIT: Geopolitical

You can select the tagset to use in the output by configuring the model. This model handles uppercase words differently.

More information about the model can be found in the paper (https://aclanthology.org/2021.bsnlp-1.12.pdf) and GitHub repository (https://github.com/EMBEDDIA/NER_FEDA).
{"language": ["multilingual", "uk"], "license": "mit", "tags": ["labse", "ner"]}
creat89/NER_FEDA_Uk
null
[ "transformers", "pytorch", "bert", "labse", "ner", "multilingual", "uk", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "multilingual", "uk" ]
TAGS #transformers #pytorch #bert #labse #ner #multilingual #uk #license-mit #endpoints_compatible #region-us
This is a multilingual NER system trained using a Frustratingly Easy Domain Adaptation architecture. It is based on LaBSE and supports different tagsets, all using the IOBES format:

1. Wikiann (LOC, PER, ORG)
2. SlavNER 19/21 (EVT, LOC, ORG, PER, PRO)
3. NER-UK (LOC, MISC, ORG, PER)
4. Turku (DATE, EVT, LOC, ORG, PER, PRO, TIME)

PER: person, LOC: location, ORG: organization, EVT: event, PRO: product, MISC: Miscellaneous, MEDIA: media, ART: Artifact, TIME: time, DATE: date, GEOPOLIT: Geopolitical

You can select the tagset to use in the output by configuring the model. This model handles uppercase words differently.

More information about the model can be found in the paper (URL and GitHub repository (URL
[]
[ "TAGS\n#transformers #pytorch #bert #labse #ner #multilingual #uk #license-mit #endpoints_compatible #region-us \n" ]
text2text-generation
transformers
# MyModel ## Model description This is the `BART-TL-all` model from the paper [BART-TL: Weakly-Supervised Topic Label Generation](https://www.aclweb.org/anthology/2021.eacl-main.121.pdf). We aim to solve the topic labeling task using generative methods, rather than selection from a pool of labels as was done in previous State of the Art works. For more details not covered here, you can read the paper or look at the open-source implementation: https://github.com/CristianViorelPopa/BART-TL-topic-label-generation. There are two models made available from the paper: * [BART-TL-all](https://huggingface.co/cristian-popa/bart-tl-all) * [BART-TL-ng](https://huggingface.co/cristian-popa/bart-tl-ng) ## Intended uses & limitations #### How to use The model takes in a topic, represented as a space-separated series of words. Such topics can be generated using LDA, as was done for gathering the fine-tuning dataset for the model. ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM mname = "cristian-popa/bart-tl-all" tokenizer = AutoTokenizer.from_pretrained(mname) model = AutoModelForSeq2SeqLM.from_pretrained(mname) input = "site web google search website online internet social content user" enc = tokenizer(input, return_tensors="pt", truncation=True, padding="max_length", max_length=128) outputs = model.generate( input_ids=enc.input_ids, attention_mask=enc.attention_mask, max_length=15, min_length=1, do_sample=False, num_beams=25, length_penalty=1.0, repetition_penalty=1.5 ) decoded = tokenizer.decode(outputs[0], skip_special_tokens=True) print(decoded) # application programming interface ``` #### Limitations and bias The model may not generate accurate labels for topics from domains unrelated to the ones it was fine-tuned on, such as gastronomy. ## Training data The model was fine-tuned on 5 different StackExchange corpora (see https://archive.org/download/stackexchange for a full list of existing such corpora): English, biology, economics, law, and photography. 100 topics are extracted using LDA for each of these corpora, filtered for coherence and then used for obtaining the final model here. ## Training procedure The large Facebook BART model is fine-tuned in a weakly-supervised manner, making use of the unsupervised candidate selection of the [NETL](https://www.aclweb.org/anthology/C16-1091.pdf) method, along with other heuristic labels, such as n-grams from the topics, relevant sentences in the corpora and noun phrases. The dataset is a one-to-many mapping from topics to labels. More details on training and parameters can be found in the [paper](https://www.aclweb.org/anthology/2021.eacl-main.121.pdf) or by following [this notebook](https://github.com/CristianViorelPopa/BART-TL-topic-label-generation/blob/main/notebooks/end_to_end_workflow.ipynb). ## Eval results model | Top-1 Avg. | Top-3 Avg. | Top-5 Avg. 
| nDCG-1 | nDCG-3 | nDCG-5 ------------|------------|------------|------------|--------|--------|------- NETL (U) | 2.66 | 2.59 | 2.50 | 0.83 | 0.85 | 0.87 NETL (S) | 2.74 | 2.57 | 2.49 | 0.88 | 0.85 | 0.88 BART-TL-all | 2.64 | 2.52 | 2.43 | 0.83 | 0.84 | 0.87 BART-TL-ng | 2.62 | 2.50 | 2.33 | 0.82 | 0.84 | 0.85 ### BibTeX entry and citation info ```bibtex @inproceedings{popa-rebedea-2021-bart, title = "{BART}-{TL}: Weakly-Supervised Topic Label Generation", author = "Popa, Cristian and Rebedea, Traian", booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume", month = apr, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2021.eacl-main.121", pages = "1418--1425", abstract = "We propose a novel solution for assigning labels to topic models by using multiple weak labelers. The method leverages generative transformers to learn accurate representations of the most important topic terms and candidate labels. This is achieved by fine-tuning pre-trained BART models on a large number of potential labels generated by state of the art non-neural models for topic labeling, enriched with different techniques. The proposed BART-TL model is able to generate valuable and novel labels in a weakly-supervised manner and can be improved by adding other weak labelers or distant supervision on similar tasks.", } ```
{"language": ["en"], "license": "apache-2.0", "tags": ["topic labeling"], "metrics": ["ndcg"], "<!-- thumbnail": "https://raw.githubusercontent.com/JetRunner/BERT-of-Theseus/master/bert-of-theseus.png -->"}
cristian-popa/bart-tl-all
null
[ "transformers", "pytorch", "bart", "text2text-generation", "topic labeling", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #bart #text2text-generation #topic labeling #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
MyModel ======= Model description ----------------- This is the 'BART-TL-all' model from the paper BART-TL: Weakly-Supervised Topic Label Generation. We aim to solve the topic labeling task using generative methods, rather than selection from a pool of labels as was done in previous State of the Art works. For more details not covered here, you can read the paper or look at the open-source implementation: URL There are two models made available from the paper: * BART-TL-all * BART-TL-ng Intended uses & limitations --------------------------- #### How to use The model takes in a topic, represented as a space-separated series of words. Such topics can be generated using LDA, as was done for gathering the fine-tuning dataset for the model. #### Limitations and bias The model may not generate accurate labels for topics from domains unrelated to the ones it was fine-tuned on, such as gastronomy. Training data ------------- The model was fine-tuned on 5 different StackExchange corpora (see URL for a full list of existing such corpora): English, biology, economics, law, and photography. 100 topics are extracted using LDA for each of these corpora, filtered for coherence and then used for obtaining the final model here. Training procedure ------------------ The large Facebook BART model is fine-tuned in a weakly-supervised manner, making use of the unsupervised candidate selection of the NETL method, along with other heuristic labels, such as n-grams from the topics, relevant sentences in the corpora and noun phrases. The dataset is a one-to-many mapping from topics to labels. More details on training and parameters can be found in the paper or by following this notebook. Eval results ------------ ### BibTeX entry and citation info
[ "#### How to use\n\n\nThe model takes in a topic, represented as a space-separated series of words. Such topics can be generated using LDA, as was done for gathering the fine-tuning dataset for the model.", "#### Limitations and bias\n\n\nThe model may not generate accurate labels for topics from domains unrelated to the ones it was fine-tuned on, such as gastronomy.\n\n\nTraining data\n-------------\n\n\nThe model was fine-tuned on 5 different StackExchange corpora (see URL for a full list of existing such corpora): English, biology, economics, law, and photography. 100 topics are extracted using LDA for each of these corpora, filtered for coherence and then used for obtaining the final model here.\n\n\nTraining procedure\n------------------\n\n\nThe large Facebook BART model is fine-tuned in a weakly-supervised manner, making use of the unsupervised candidate selection of the NETL method, along with other heuristic labels, such as n-grams from the topics, relevant sentences in the corpora and noun phrases. The dataset is a one-to-many mapping from topics to labels. More details on training and parameters can be found in the paper or by following this notebook.\n\n\nEval results\n------------", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #bart #text2text-generation #topic labeling #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "#### How to use\n\n\nThe model takes in a topic, represented as a space-separated series of words. Such topics can be generated using LDA, as was done for gathering the fine-tuning dataset for the model.", "#### Limitations and bias\n\n\nThe model may not generate accurate labels for topics from domains unrelated to the ones it was fine-tuned on, such as gastronomy.\n\n\nTraining data\n-------------\n\n\nThe model was fine-tuned on 5 different StackExchange corpora (see URL for a full list of existing such corpora): English, biology, economics, law, and photography. 100 topics are extracted using LDA for each of these corpora, filtered for coherence and then used for obtaining the final model here.\n\n\nTraining procedure\n------------------\n\n\nThe large Facebook BART model is fine-tuned in a weakly-supervised manner, making use of the unsupervised candidate selection of the NETL method, along with other heuristic labels, such as n-grams from the topics, relevant sentences in the corpora and noun phrases. The dataset is a one-to-many mapping from topics to labels. More details on training and parameters can be found in the paper or by following this notebook.\n\n\nEval results\n------------", "### BibTeX entry and citation info" ]
text2text-generation
transformers
# MyModel ## Model description This is the `BART-TL-ng` model from the paper [BART-TL: Weakly-Supervised Topic Label Generation](https://www.aclweb.org/anthology/2021.eacl-main.121.pdf). We aim to solve the topic labeling task using generative methods, rather than selection from a pool of labels as was done in previous State of the Art works. For more details not covered here, you can read the paper or look at the open-source implementation: https://github.com/CristianViorelPopa/BART-TL-topic-label-generation. There are two models made available from the paper: * [BART-TL-all](https://huggingface.co/cristian-popa/bart-tl-all) * [BART-TL-ng](https://huggingface.co/cristian-popa/bart-tl-ng) ## Intended uses & limitations #### How to use The model takes in a topic, represented as a space-separated series of words. Such topics can be generated using LDA, as was done for gathering the fine-tuning dataset for the model. ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM mname = "cristian-popa/bart-tl-ng" tokenizer = AutoTokenizer.from_pretrained(mname) model = AutoModelForSeq2SeqLM.from_pretrained(mname) input = "site web google search website online internet social content user" enc = tokenizer(input, return_tensors="pt", truncation=True, padding="max_length", max_length=128) outputs = model.generate( input_ids=enc.input_ids, attention_mask=enc.attention_mask, max_length=15, min_length=1, do_sample=False, num_beams=25, length_penalty=1.0, repetition_penalty=1.5 ) decoded = tokenizer.decode(outputs[0], skip_special_tokens=True) print(decoded) # windows live messenger ``` #### Limitations and bias The model may not generate accurate labels for topics from domains unrelated to the ones it was fine-tuned on, such as gastronomy. ## Training data The model was fine-tuned on 5 different StackExchange corpora (see https://archive.org/download/stackexchange for a full list of existing such corpora): English, biology, economics, law, and photography. 100 topics are extracted using LDA for each of these corpora, filtered for coherence and then used for obtaining the final model here. ## Training procedure The large Facebook BART model is fine-tuned in a weakly-supervised manner, making use of the unsupervised candidate selection of the [NETL](https://www.aclweb.org/anthology/C16-1091.pdf) method, along with n-grams from the topics. The dataset is a one-to-many mapping from topics to labels. More details on training and parameters can be found in the [paper](https://www.aclweb.org/anthology/2021.eacl-main.121.pdf) or by following [this notebook](https://github.com/CristianViorelPopa/BART-TL-topic-label-generation/blob/main/notebooks/end_to_end_workflow.ipynb). ## Eval results model | Top-1 Avg. | Top-3 Avg. | Top-5 Avg. 
| nDCG-1 | nDCG-3 | nDCG-5 ------------|------------|------------|------------|--------|--------|------- NETL (U) | 2.66 | 2.59 | 2.50 | 0.83 | 0.85 | 0.87 NETL (S) | 2.74 | 2.57 | 2.49 | 0.88 | 0.85 | 0.88 BART-TL-all | 2.64 | 2.52 | 2.43 | 0.83 | 0.84 | 0.87 BART-TL-ng | 2.62 | 2.50 | 2.33 | 0.82 | 0.84 | 0.85 ### BibTeX entry and citation info ```bibtex @inproceedings{popa-rebedea-2021-bart, title = "{BART}-{TL}: Weakly-Supervised Topic Label Generation", author = "Popa, Cristian and Rebedea, Traian", booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume", month = apr, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2021.eacl-main.121", pages = "1418--1425", abstract = "We propose a novel solution for assigning labels to topic models by using multiple weak labelers. The method leverages generative transformers to learn accurate representations of the most important topic terms and candidate labels. This is achieved by fine-tuning pre-trained BART models on a large number of potential labels generated by state of the art non-neural models for topic labeling, enriched with different techniques. The proposed BART-TL model is able to generate valuable and novel labels in a weakly-supervised manner and can be improved by adding other weak labelers or distant supervision on similar tasks.", } ```
{"language": ["en"], "license": "apache-2.0", "tags": ["topic labeling"], "metrics": ["ndcg"], "<!-- thumbnail": "https://raw.githubusercontent.com/JetRunner/BERT-of-Theseus/master/bert-of-theseus.png -->"}
cristian-popa/bart-tl-ng
null
[ "transformers", "pytorch", "bart", "text2text-generation", "topic labeling", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #bart #text2text-generation #topic labeling #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
MyModel ======= Model description ----------------- This is the 'BART-TL-ng' model from the paper BART-TL: Weakly-Supervised Topic Label Generation. We aim to solve the topic labeling task using generative methods, rather than selection from a pool of labels as was done in previous State of the Art works. For more details not covered here, you can read the paper or look at the open-source implementation: URL There are two models made available from the paper: * BART-TL-all * BART-TL-ng Intended uses & limitations --------------------------- #### How to use The model takes in a topic, represented as a space-separated series of words. Such topics can be generated using LDA, as was done for gathering the fine-tuning dataset for the model. #### Limitations and bias The model may not generate accurate labels for topics from domains unrelated to the ones it was fine-tuned on, such as gastronomy. Training data ------------- The model was fine-tuned on 5 different StackExchange corpora (see URL for a full list of existing such corpora): English, biology, economics, law, and photography. 100 topics are extracted using LDA for each of these corpora, filtered for coherence and then used for obtaining the final model here. Training procedure ------------------ The large Facebook BART model is fine-tuned in a weakly-supervised manner, making use of the unsupervised candidate selection of the NETL method, along with n-grams from the topics. The dataset is a one-to-many mapping from topics to labels. More details on training and parameters can be found in the paper or by following this notebook. Eval results ------------ ### BibTeX entry and citation info
[ "#### How to use\n\n\nThe model takes in a topic, represented as a space-separated series of words. Such topics can be generated using LDA, as was done for gathering the fine-tuning dataset for the model.", "#### Limitations and bias\n\n\nThe model may not generate accurate labels for topics from domains unrelated to the ones it was fine-tuned on, such as gastronomy.\n\n\nTraining data\n-------------\n\n\nThe model was fine-tuned on 5 different StackExchange corpora (see URL for a full list of existing such corpora): English, biology, economics, law, and photography. 100 topics are extracted using LDA for each of these corpora, filtered for coherence and then used for obtaining the final model here.\n\n\nTraining procedure\n------------------\n\n\nThe large Facebook BART model is fine-tuned in a weakly-supervised manner, making use of the unsupervised candidate selection of the NETL method, along with n-grams from the topics. The dataset is a one-to-many mapping from topics to labels. More details on training and parameters can be found in the paper or by following this notebook.\n\n\nEval results\n------------", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #bart #text2text-generation #topic labeling #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "#### How to use\n\n\nThe model takes in a topic, represented as a space-separated series of words. Such topics can be generated using LDA, as was done for gathering the fine-tuning dataset for the model.", "#### Limitations and bias\n\n\nThe model may not generate accurate labels for topics from domains unrelated to the ones it was fine-tuned on, such as gastronomy.\n\n\nTraining data\n-------------\n\n\nThe model was fine-tuned on 5 different StackExchange corpora (see URL for a full list of existing such corpora): English, biology, economics, law, and photography. 100 topics are extracted using LDA for each of these corpora, filtered for coherence and then used for obtaining the final model here.\n\n\nTraining procedure\n------------------\n\n\nThe large Facebook BART model is fine-tuned in a weakly-supervised manner, making use of the unsupervised candidate selection of the NETL method, along with n-grams from the topics. The dataset is a one-to-many mapping from topics to labels. More details on training and parameters can be found in the paper or by following this notebook.\n\n\nEval results\n------------", "### BibTeX entry and citation info" ]
translation
null
### Preprocessing
1. Normalisation and tokenisation with moses scripts
2. truecased with model docgWP.tcmodel.[LAN] and moses scripts
3. bped with model model.caesen40k.bpe and subword-nmt
- Note: no prepended tag for multilinguality

### Training Data
1. Bilingual es-ca: DOGC, Wikimatrix, OpenSubtitles, JW300, GlobalVoices
* Bilingual es-ca: Translations using systems trained with 1. of Oscar and Wikipedia
2. Bilingual es-en, ca-en: United Nations, Europarl, Wikimatrix, OpenSubtitles, JW300
* Bilingual es-en, ca-en: Translations using systems trained with 1. of the missing pairs

- Final training data size for the ca/es-en: 44M parallel sentences
- Finetuned with 1.5M real parallel data (without backtranslations)

### Model
Transformer big with guided alignments. Relevant parameters:

--beam-size 6
--normalize 0.6
--enc-depth 6 --dec-depth 6 --transformer-heads 8
--transformer-preprocess n --transformer-postprocess da
--transformer-dropout 0.1
--label-smoothing 0.1
--dim-emb 1024 --transformer-dim-ffn 4096
--transformer-dropout-attention 0.1
--transformer-dropout-ffn 0.1
--learn-rate 0.00015 --lr-warmup 8000 --lr-decay-inv-sqrt 8000
--optimizer-params 0.9 0.998 1e-09
--clip-norm 5
--tied-embeddings
--exponential-smoothing
--transformer-guided-alignment-layer 1 --guided-alignment-cost mse --guided-alignment-weight 0.1

## Evaluation

### Test set
https://github.com/PLXIV/Gebiotoolkit/tree/master/gebiocorpus_v2

### ca2en
BLEU|#:1|bs:1000|rs:12345|c:mixed|e:no|tok:13a|s:exp|v:2.0.0 = 47.8 (μ = 47.8 ± 0.9)

chrF|#:1|bs:1000|rs:12345|c:mixed|e:yes|nc:6|nw:0|s:no|v:2.0.0 = 69.9 (μ = 69.9 ± 0.7)

### es2en
BLEU|#:1|bs:1000|rs:12345|c:mixed|e:no|tok:13a|s:exp|v:2.0.0 = 48.9 (μ = 48.9 ± 0.9)

chrF2|#:1|bs:1000|rs:12345|c:mixed|e:yes|nc:6|nw:0|s:no|v:2.0.0 = 70.5 (μ = 70.5 ± 0.7)
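The BLEU and chrF lines above are sacrebleu score signatures (the `bs:1000` field indicates bootstrap resampling with 1000 samples, which produces the ± confidence intervals). A minimal sketch of recomputing such corpus-level scores with the sacrebleu Python API is shown below; the file names are placeholders, and the exact signature fields depend on the sacrebleu version and options used.

```python
from sacrebleu.metrics import BLEU, CHRF

# Placeholder paths: detokenized system output and reference translations, one sentence per line.
hypotheses = [line.rstrip("\n") for line in open("hyp.ca2en.txt", encoding="utf-8")]
references = [line.rstrip("\n") for line in open("ref.en.txt", encoding="utf-8")]

bleu = BLEU()   # defaults: 13a tokenizer, mixed case, exponential smoothing
chrf = CHRF()   # defaults: char n-gram order 6, word order 0

print(bleu.corpus_score(hypotheses, [references]), bleu.get_signature())
print(chrf.corpus_score(hypotheses, [references]), chrf.get_signature())
```

This sketch reports point scores only; the ± intervals in the card come from bootstrap resampling, which newer sacrebleu versions expose through a confidence/bootstrap option.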
{"language": ["ca", "es", "en"], "tags": ["translation"]}
cristinae/marian_caes2en
null
[ "translation", "ca", "es", "en", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ca", "es", "en" ]
TAGS #translation #ca #es #en #region-us
### Preprocessing 1. Normalisation and tokenisation with moses scripts 2. truecased with model docgWP.tcmodel.[LAN] and moses scripts 3. bped with model URL and subword-nmt - Note: no prepended tag for multilinguality ### Training Data 1. Bilingual es-ca: DOGC, Wikimatrix, OpenSubtitles, JW300, GlobalVoices * Bilingual es-ca: Translations using systems trained with 1. of Oscar and Wikipedia 2. Bilingual es-en, ca-en: United Nations, Europarl, Wikimatrix, OpenSubtitles, JW300 * Bilingual es-en, ca-en: Translations using systems trained with 1. of the missing pairs - Final training data size for the ca/es-en: 44M parallel sentences - Finetuned with 1.5M real parallel data (without backtranslations) ### Model Transformer big with guided alignments. Relevant parameters: --beam-size 6 --normalize 0.6 --enc-depth 6 --dec-depth 6 --transformer-heads 8 --transformer-preprocess n --transformer-postprocess da --transformer-dropout 0.1 --label-smoothing 0.1 --dim-emb 1024 --transformer-dim-ffn 4096 --transformer-dropout-attention 0.1 --transformer-dropout-ffn 0.1 --learn-rate 0.00015 --lr-warmup 8000 --lr-decay-inv-sqrt 8000 --optimizer-params 0.9 0.998 1e-09 --clip-norm 5 --tied-embeddings --exponential-smoothing --transformer-guided-alignment-layer 1 --guided-alignment-cost mse --guided-alignment-weight 0.1 ## Evaluation ### Test set URL ### ca2en BLEU|#:1|bs:1000|rs:12345|c:mixed|e:no|tok:13a|s:exp|v:2.0.0 = 47.8 (μ = 47.8 ± 0.9) chrF|#:1|bs:1000|rs:12345|c:mixed|e:yes|nc:6|nw:0|s:no|v:2.0.0 = 69.9 (μ = 69.9 ± 0.7) ### es2en BLEU|#:1|bs:1000|rs:12345|c:mixed|e:no|tok:13a|s:exp|v:2.0.0 = 48.9 (μ = 48.9 ± 0.9) chrF2|#:1|bs:1000|rs:12345|c:mixed|e:yes|nc:6|nw:0|s:no|v:2.0.0 = 70.5 (μ = 70.5 ± 0.7)
[ "### Preprocessing\n1. Normalisation and tokenisation with moses scripts\n2. truecased with model docgWP.tcmodel.[LAN] and moses scripts\n3. bped with model URL and subword-nmt\n- Note: no prepended tag for multilinguality", "### Training Data\n1. Bilingual es-ca: DOGC, Wikimatrix, OpenSubtitles, JW300, GlobalVoices\n* Bilingual es-ca: Translations using systems trained with 1. of Oscar and Wikipedia\n2. Bilingual es-en, ca-en: United Nations, Europarl, Wikimatrix, OpenSubtitles, JW300\n* Bilingual es-en, ca-en: Translations using systems trained with 1. of the missing pairs\n\n- Final training data size for the ca/es-en: 44M parallel sentences\n- Finetuned with 1.5M real parallel data (without backtranslations)", "### Model\nTransformer big with guided alignments. Relevant parameters:\n\n--beam-size 6 \n\n--normalize 0.6 \n\n--enc-depth 6 --dec-depth 6 --transformer-heads 8\n\n--transformer-preprocess n --transformer-postprocess da \n\n--transformer-dropout 0.1 \n\n--label-smoothing 0.1 \n\n--dim-emb 1024 --transformer-dim-ffn 4096 \n\n--transformer-dropout-attention 0.1 \n\n--transformer-dropout-ffn 0.1 \n\n--learn-rate 0.00015 --lr-warmup 8000 --lr-decay-inv-sqrt 8000 \n\n--optimizer-params 0.9 0.998 1e-09 \n\n--clip-norm 5 \n\n--tied-embeddings \n\n--exponential-smoothing \n\n--transformer-guided-alignment-layer 1 --guided-alignment-cost mse --guided-alignment-weight 0.1", "## Evaluation", "### Test set\n\nURL", "### ca2en\n BLEU|#:1|bs:1000|rs:12345|c:mixed|e:no|tok:13a|s:exp|v:2.0.0 = 47.8 (μ = 47.8 ± 0.9)\n\n chrF|#:1|bs:1000|rs:12345|c:mixed|e:yes|nc:6|nw:0|s:no|v:2.0.0 = 69.9 (μ = 69.9 ± 0.7)", "### es2en\nBLEU|#:1|bs:1000|rs:12345|c:mixed|e:no|tok:13a|s:exp|v:2.0.0 = 48.9 (μ = 48.9 ± 0.9) \n\nchrF2|#:1|bs:1000|rs:12345|c:mixed|e:yes|nc:6|nw:0|s:no|v:2.0.0 = 70.5 (μ = 70.5 ± 0.7)" ]
[ "TAGS\n#translation #ca #es #en #region-us \n", "### Preprocessing\n1. Normalisation and tokenisation with moses scripts\n2. truecased with model docgWP.tcmodel.[LAN] and moses scripts\n3. bped with model URL and subword-nmt\n- Note: no prepended tag for multilinguality", "### Training Data\n1. Bilingual es-ca: DOGC, Wikimatrix, OpenSubtitles, JW300, GlobalVoices\n* Bilingual es-ca: Translations using systems trained with 1. of Oscar and Wikipedia\n2. Bilingual es-en, ca-en: United Nations, Europarl, Wikimatrix, OpenSubtitles, JW300\n* Bilingual es-en, ca-en: Translations using systems trained with 1. of the missing pairs\n\n- Final training data size for the ca/es-en: 44M parallel sentences\n- Finetuned with 1.5M real parallel data (without backtranslations)", "### Model\nTransformer big with guided alignments. Relevant parameters:\n\n--beam-size 6 \n\n--normalize 0.6 \n\n--enc-depth 6 --dec-depth 6 --transformer-heads 8\n\n--transformer-preprocess n --transformer-postprocess da \n\n--transformer-dropout 0.1 \n\n--label-smoothing 0.1 \n\n--dim-emb 1024 --transformer-dim-ffn 4096 \n\n--transformer-dropout-attention 0.1 \n\n--transformer-dropout-ffn 0.1 \n\n--learn-rate 0.00015 --lr-warmup 8000 --lr-decay-inv-sqrt 8000 \n\n--optimizer-params 0.9 0.998 1e-09 \n\n--clip-norm 5 \n\n--tied-embeddings \n\n--exponential-smoothing \n\n--transformer-guided-alignment-layer 1 --guided-alignment-cost mse --guided-alignment-weight 0.1", "## Evaluation", "### Test set\n\nURL", "### ca2en\n BLEU|#:1|bs:1000|rs:12345|c:mixed|e:no|tok:13a|s:exp|v:2.0.0 = 47.8 (μ = 47.8 ± 0.9)\n\n chrF|#:1|bs:1000|rs:12345|c:mixed|e:yes|nc:6|nw:0|s:no|v:2.0.0 = 69.9 (μ = 69.9 ± 0.7)", "### es2en\nBLEU|#:1|bs:1000|rs:12345|c:mixed|e:no|tok:13a|s:exp|v:2.0.0 = 48.9 (μ = 48.9 ± 0.9) \n\nchrF2|#:1|bs:1000|rs:12345|c:mixed|e:yes|nc:6|nw:0|s:no|v:2.0.0 = 70.5 (μ = 70.5 ± 0.7)" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec-timit

This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP

### Framework versions

- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
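The hyperparameters listed in the card map directly onto transformers `TrainingArguments`. Below is a minimal sketch of that mapping; the output directory is a placeholder, the Adam betas/epsilon shown in the card are the library defaults, and a complete run would additionally need a `Wav2Vec2ForCTC` model, a processor, a CTC data collator, and the dataset wiring that the card does not show.

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters reported above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="wav2vec-timit",
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=30,
    fp16=True,  # "Native AMP" mixed-precision training
)
```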
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec-timit", "results": []}]}
cristinakuo/wav2vec-timit
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
# wav2vec-timit This model is a fine-tuned version of facebook/wav2vec2-base on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
[ "# wav2vec-timit\n\nThis model is a fine-tuned version of facebook/wav2vec2-base on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 1000\n- num_epochs: 30\n- mixed_precision_training: Native AMP", "### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.10.0+cu111\n- Datasets 1.13.3\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n", "# wav2vec-timit\n\nThis model is a fine-tuned version of facebook/wav2vec2-base on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 1000\n- num_epochs: 30\n- mixed_precision_training: Native AMP", "### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.10.0+cu111\n- Datasets 1.13.3\n- Tokenizers 0.10.3" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-latino40

This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8795
- Wer: 1.0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 10
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 5.6846        | 0.83  | 100  | 2.9086          | 1.0 |
| 2.8686        | 1.67  | 200  | 2.8922          | 1.0 |
| 2.8805        | 2.5   | 300  | 2.9326          | 1.0 |
| 2.8613        | 3.33  | 400  | 2.8698          | 1.0 |
| 2.8643        | 4.17  | 500  | 2.9027          | 1.0 |
| 2.8688        | 5.0   | 600  | 2.9544          | 1.0 |
| 2.8689        | 5.83  | 700  | 2.8914          | 1.0 |
| 2.8558        | 6.67  | 800  | 2.8762          | 1.0 |
| 2.8537        | 7.5   | 900  | 2.8982          | 1.0 |
| 2.8522        | 8.33  | 1000 | 2.8820          | 1.0 |
| 2.8468        | 9.17  | 1100 | 2.8760          | 1.0 |
| 2.8454        | 10.0  | 1200 | 2.8795          | 1.0 |

### Framework versions

- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
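A word error rate that stays at 1.0 for the whole run usually indicates degenerate output (for example, the model emitting only blank/padding tokens), often due to a tokenizer or vocabulary mismatch rather than the data itself. For reference, here is a minimal sketch of how the reported WER metric can be computed with the `jiwer` package; the transcription pairs are made up.

```python
import jiwer

# Made-up reference/hypothesis pairs, purely to illustrate the metric.
references = ["buenos días cómo estás", "el gato duerme en la silla"]
hypotheses = ["buenos días como estas", "el gato duerme en su cama"]

# WER = (substitutions + deletions + insertions) / reference words, pooled over all pairs.
wer = jiwer.wer(references, hypotheses)
print(f"WER: {wer:.2f}")  # a constant 1.0 would mean essentially no correct words
```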
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-latino40", "results": []}]}
cristinakuo/wav2vec2-latino40
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
wav2vec2-latino40 ================= This model is a fine-tuned version of facebook/wav2vec2-base on the None dataset. It achieves the following results on the evaluation set: * Loss: 2.8795 * Wer: 1.0 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0001 * train\_batch\_size: 32 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 100 * num\_epochs: 10 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.12.5 * Pytorch 1.9.1 * Datasets 1.16.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.9.1\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.9.1\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
text-classification
transformers
# Cross-Encoder for MS Marco

This model was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task.

The model can be used for Information Retrieval: Given a query, encode the query with all possible passages (e.g. retrieved with ElasticSearch). Then sort the passages in decreasing order. See [SBERT.net Retrieve & Re-rank](https://www.sbert.net/examples/applications/retrieve_rerank/README.html) for more details. The training code is available here: [SBERT.net Training MS Marco](https://github.com/UKPLab/sentence-transformers/tree/master/examples/training/ms_marco)

## Usage with Transformers

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model = AutoModelForSequenceClassification.from_pretrained('model_name')
tokenizer = AutoTokenizer.from_pretrained('model_name')

features = tokenizer(['How many people live in Berlin?', 'How many people live in Berlin?'], ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.'], padding=True, truncation=True, return_tensors="pt")

model.eval()
with torch.no_grad():
    scores = model(**features).logits
    print(scores)
```

## Usage with SentenceTransformers

The usage becomes easier when you have [SentenceTransformers](https://www.sbert.net/) installed. Then, you can use the pre-trained models like this:

```python
from sentence_transformers import CrossEncoder

model = CrossEncoder('model_name', max_length=512)
scores = model.predict([('Query', 'Paragraph1'), ('Query', 'Paragraph2'), ('Query', 'Paragraph3')])
```

## Performance

In the following table, we provide various pre-trained Cross-Encoders together with their performance on the [TREC Deep Learning 2019](https://microsoft.github.io/TREC-2019-Deep-Learning/) and the [MS Marco Passage Reranking](https://github.com/microsoft/MSMARCO-Passage-Ranking/) dataset.

| Model-Name | NDCG@10 (TREC DL 19) | MRR@10 (MS Marco Dev) | Docs / Sec |
| ------------- |:-------------| -----| --- |
| **Version 2 models** | | | |
| cross-encoder/ms-marco-TinyBERT-L-2-v2 | 69.84 | 32.56 | 9000 |
| cross-encoder/ms-marco-MiniLM-L-2-v2 | 71.01 | 34.85 | 4100 |
| cross-encoder/ms-marco-MiniLM-L-4-v2 | 73.04 | 37.70 | 2500 |
| cross-encoder/ms-marco-MiniLM-L-6-v2 | 74.30 | 39.01 | 1800 |
| cross-encoder/ms-marco-MiniLM-L-12-v2 | 74.31 | 39.02 | 960 |
| **Version 1 models** | | | |
| cross-encoder/ms-marco-TinyBERT-L-2 | 67.43 | 30.15 | 9000 |
| cross-encoder/ms-marco-TinyBERT-L-4 | 68.09 | 34.50 | 2900 |
| cross-encoder/ms-marco-TinyBERT-L-6 | 69.57 | 36.13 | 680 |
| cross-encoder/ms-marco-electra-base | 71.99 | 36.41 | 340 |
| **Other models** | | | |
| nboost/pt-tinybert-msmarco | 63.63 | 28.80 | 2900 |
| nboost/pt-bert-base-uncased-msmarco | 70.94 | 34.75 | 340 |
| nboost/pt-bert-large-msmarco | 73.36 | 36.48 | 100 |
| Capreolus/electra-base-msmarco | 71.23 | 36.89 | 340 |
| amberoad/bert-multilingual-passage-reranking-msmarco | 68.40 | 35.54 | 330 |
| sebastian-hofstaetter/distilbert-cat-margin_mse-T2-msmarco | 72.82 | 37.88 | 720 |

Note: Runtime was computed on a V100 GPU.
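The card explains the intended retrieve-then-re-rank flow (score every query-passage pair, then sort passages by score) but only shows the scoring call. Below is a minimal sketch of the sorting step with the SentenceTransformers `CrossEncoder`; the candidate passages are placeholders standing in for hits returned by a first-stage retriever such as BM25/ElasticSearch.

```python
from sentence_transformers import CrossEncoder

model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-12-v2", max_length=512)

query = "How many people live in Berlin?"
# Placeholder candidates; in practice these come from a first-stage retriever.
passages = [
    "Berlin has a population of 3,520,031 registered inhabitants.",
    "New York City is famous for the Metropolitan Museum of Art.",
    "Berlin is the capital and largest city of Germany.",
]

scores = model.predict([(query, passage) for passage in passages])
ranked = sorted(zip(passages, scores), key=lambda pair: pair[1], reverse=True)
for passage, score in ranked:
    print(f"{score:.2f}\t{passage}")
```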
{"license": "apache-2.0"}
cross-encoder/ms-marco-MiniLM-L-12-v2
null
[ "transformers", "pytorch", "jax", "bert", "text-classification", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #jax #bert #text-classification #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
Cross-Encoder for MS Marco ========================== This model was trained on the MS Marco Passage Ranking task. The model can be used for Information Retrieval: Given a query, encode the query with all possible passages (e.g. retrieved with ElasticSearch). Then sort the passages in decreasing order. See URL Retrieve & Re-rank for more details. The training code is available here: URL Training MS Marco Usage with Transformers ----------------------- Usage with SentenceTransformers ------------------------------- The usage becomes easier when you have SentenceTransformers installed. Then, you can use the pre-trained models like this: Performance ----------- In the following table, we provide various pre-trained Cross-Encoders together with their performance on the TREC Deep Learning 2019 and the MS Marco Passage Reranking dataset. Note: Runtime was computed on a V100 GPU.
[]
[ "TAGS\n#transformers #pytorch #jax #bert #text-classification #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n" ]
text-classification
transformers
# Cross-Encoder for MS Marco This model was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task. The model can be used for Information Retrieval: Given a query, encode the query will all possible passages (e.g. retrieved with ElasticSearch). Then sort the passages in a decreasing order. See [SBERT.net Retrieve & Re-rank](https://www.sbert.net/examples/applications/retrieve_rerank/README.html) for more details. The training code is available here: [SBERT.net Training MS Marco](https://github.com/UKPLab/sentence-transformers/tree/master/examples/training/ms_marco) ## Usage with Transformers ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch model = AutoModelForSequenceClassification.from_pretrained('model_name') tokenizer = AutoTokenizer.from_pretrained('model_name') features = tokenizer(['How many people live in Berlin?', 'How many people live in Berlin?'], ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.'], padding=True, truncation=True, return_tensors="pt") model.eval() with torch.no_grad(): scores = model(**features).logits print(scores) ``` ## Usage with SentenceTransformers The usage becomes easier when you have [SentenceTransformers](https://www.sbert.net/) installed. Then, you can use the pre-trained models like this: ```python from sentence_transformers import CrossEncoder model = CrossEncoder('model_name', max_length=512) scores = model.predict([('Query', 'Paragraph1'), ('Query', 'Paragraph2') , ('Query', 'Paragraph3')]) ``` ## Performance In the following table, we provide various pre-trained Cross-Encoders together with their performance on the [TREC Deep Learning 2019](https://microsoft.github.io/TREC-2019-Deep-Learning/) and the [MS Marco Passage Reranking](https://github.com/microsoft/MSMARCO-Passage-Ranking/) dataset. | Model-Name | NDCG@10 (TREC DL 19) | MRR@10 (MS Marco Dev) | Docs / Sec | | ------------- |:-------------| -----| --- | | **Version 2 models** | | | | cross-encoder/ms-marco-TinyBERT-L-2-v2 | 69.84 | 32.56 | 9000 | cross-encoder/ms-marco-MiniLM-L-2-v2 | 71.01 | 34.85 | 4100 | cross-encoder/ms-marco-MiniLM-L-4-v2 | 73.04 | 37.70 | 2500 | cross-encoder/ms-marco-MiniLM-L-6-v2 | 74.30 | 39.01 | 1800 | cross-encoder/ms-marco-MiniLM-L-12-v2 | 74.31 | 39.02 | 960 | **Version 1 models** | | | | cross-encoder/ms-marco-TinyBERT-L-2 | 67.43 | 30.15 | 9000 | cross-encoder/ms-marco-TinyBERT-L-4 | 68.09 | 34.50 | 2900 | cross-encoder/ms-marco-TinyBERT-L-6 | 69.57 | 36.13 | 680 | cross-encoder/ms-marco-electra-base | 71.99 | 36.41 | 340 | **Other models** | | | | nboost/pt-tinybert-msmarco | 63.63 | 28.80 | 2900 | nboost/pt-bert-base-uncased-msmarco | 70.94 | 34.75 | 340 | nboost/pt-bert-large-msmarco | 73.36 | 36.48 | 100 | Capreolus/electra-base-msmarco | 71.23 | 36.89 | 340 | amberoad/bert-multilingual-passage-reranking-msmarco | 68.40 | 35.54 | 330 | sebastian-hofstaetter/distilbert-cat-margin_mse-T2-msmarco | 72.82 | 37.88 | 720 Note: Runtime was computed on a V100 GPU.
{"license": "apache-2.0"}
cross-encoder/ms-marco-MiniLM-L-2-v2
null
[ "transformers", "pytorch", "jax", "bert", "text-classification", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #jax #bert #text-classification #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
Cross-Encoder for MS Marco ========================== This model was trained on the MS Marco Passage Ranking task. The model can be used for Information Retrieval: Given a query, encode the query will all possible passages (e.g. retrieved with ElasticSearch). Then sort the passages in a decreasing order. See URL Retrieve & Re-rank for more details. The training code is available here: URL Training MS Marco Usage with Transformers ----------------------- Usage with SentenceTransformers ------------------------------- The usage becomes easier when you have SentenceTransformers installed. Then, you can use the pre-trained models like this: Performance ----------- In the following table, we provide various pre-trained Cross-Encoders together with their performance on the TREC Deep Learning 2019 and the MS Marco Passage Reranking dataset. Note: Runtime was computed on a V100 GPU.
[]
[ "TAGS\n#transformers #pytorch #jax #bert #text-classification #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n" ]
text-classification
transformers
# Cross-Encoder for MS Marco This model was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task. The model can be used for Information Retrieval: Given a query, encode the query will all possible passages (e.g. retrieved with ElasticSearch). Then sort the passages in a decreasing order. See [SBERT.net Retrieve & Re-rank](https://www.sbert.net/examples/applications/retrieve_rerank/README.html) for more details. The training code is available here: [SBERT.net Training MS Marco](https://github.com/UKPLab/sentence-transformers/tree/master/examples/training/ms_marco) ## Usage with Transformers ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch model = AutoModelForSequenceClassification.from_pretrained('model_name') tokenizer = AutoTokenizer.from_pretrained('model_name') features = tokenizer(['How many people live in Berlin?', 'How many people live in Berlin?'], ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.'], padding=True, truncation=True, return_tensors="pt") model.eval() with torch.no_grad(): scores = model(**features).logits print(scores) ``` ## Usage with SentenceTransformers The usage becomes easier when you have [SentenceTransformers](https://www.sbert.net/) installed. Then, you can use the pre-trained models like this: ```python from sentence_transformers import CrossEncoder model = CrossEncoder('model_name', max_length=512) scores = model.predict([('Query', 'Paragraph1'), ('Query', 'Paragraph2') , ('Query', 'Paragraph3')]) ``` ## Performance In the following table, we provide various pre-trained Cross-Encoders together with their performance on the [TREC Deep Learning 2019](https://microsoft.github.io/TREC-2019-Deep-Learning/) and the [MS Marco Passage Reranking](https://github.com/microsoft/MSMARCO-Passage-Ranking/) dataset. | Model-Name | NDCG@10 (TREC DL 19) | MRR@10 (MS Marco Dev) | Docs / Sec | | ------------- |:-------------| -----| --- | | **Version 2 models** | | | | cross-encoder/ms-marco-TinyBERT-L-2-v2 | 69.84 | 32.56 | 9000 | cross-encoder/ms-marco-MiniLM-L-2-v2 | 71.01 | 34.85 | 4100 | cross-encoder/ms-marco-MiniLM-L-4-v2 | 73.04 | 37.70 | 2500 | cross-encoder/ms-marco-MiniLM-L-6-v2 | 74.30 | 39.01 | 1800 | cross-encoder/ms-marco-MiniLM-L-12-v2 | 74.31 | 39.02 | 960 | **Version 1 models** | | | | cross-encoder/ms-marco-TinyBERT-L-2 | 67.43 | 30.15 | 9000 | cross-encoder/ms-marco-TinyBERT-L-4 | 68.09 | 34.50 | 2900 | cross-encoder/ms-marco-TinyBERT-L-6 | 69.57 | 36.13 | 680 | cross-encoder/ms-marco-electra-base | 71.99 | 36.41 | 340 | **Other models** | | | | nboost/pt-tinybert-msmarco | 63.63 | 28.80 | 2900 | nboost/pt-bert-base-uncased-msmarco | 70.94 | 34.75 | 340 | nboost/pt-bert-large-msmarco | 73.36 | 36.48 | 100 | Capreolus/electra-base-msmarco | 71.23 | 36.89 | 340 | amberoad/bert-multilingual-passage-reranking-msmarco | 68.40 | 35.54 | 330 | sebastian-hofstaetter/distilbert-cat-margin_mse-T2-msmarco | 72.82 | 37.88 | 720 Note: Runtime was computed on a V100 GPU.
{"license": "apache-2.0"}
cross-encoder/ms-marco-MiniLM-L-4-v2
null
[ "transformers", "pytorch", "jax", "bert", "text-classification", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #jax #bert #text-classification #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
Cross-Encoder for MS Marco ========================== This model was trained on the MS Marco Passage Ranking task. The model can be used for Information Retrieval: Given a query, encode the query will all possible passages (e.g. retrieved with ElasticSearch). Then sort the passages in a decreasing order. See URL Retrieve & Re-rank for more details. The training code is available here: URL Training MS Marco Usage with Transformers ----------------------- Usage with SentenceTransformers ------------------------------- The usage becomes easier when you have SentenceTransformers installed. Then, you can use the pre-trained models like this: Performance ----------- In the following table, we provide various pre-trained Cross-Encoders together with their performance on the TREC Deep Learning 2019 and the MS Marco Passage Reranking dataset. Note: Runtime was computed on a V100 GPU.
[]
[ "TAGS\n#transformers #pytorch #jax #bert #text-classification #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n" ]
text-classification
transformers
# Cross-Encoder for MS Marco This model was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task. The model can be used for Information Retrieval: Given a query, encode the query will all possible passages (e.g. retrieved with ElasticSearch). Then sort the passages in a decreasing order. See [SBERT.net Retrieve & Re-rank](https://www.sbert.net/examples/applications/retrieve_rerank/README.html) for more details. The training code is available here: [SBERT.net Training MS Marco](https://github.com/UKPLab/sentence-transformers/tree/master/examples/training/ms_marco) ## Usage with Transformers ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch model = AutoModelForSequenceClassification.from_pretrained('model_name') tokenizer = AutoTokenizer.from_pretrained('model_name') features = tokenizer(['How many people live in Berlin?', 'How many people live in Berlin?'], ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.'], padding=True, truncation=True, return_tensors="pt") model.eval() with torch.no_grad(): scores = model(**features).logits print(scores) ``` ## Usage with SentenceTransformers The usage becomes easier when you have [SentenceTransformers](https://www.sbert.net/) installed. Then, you can use the pre-trained models like this: ```python from sentence_transformers import CrossEncoder model = CrossEncoder('model_name', max_length=512) scores = model.predict([('Query', 'Paragraph1'), ('Query', 'Paragraph2') , ('Query', 'Paragraph3')]) ``` ## Performance In the following table, we provide various pre-trained Cross-Encoders together with their performance on the [TREC Deep Learning 2019](https://microsoft.github.io/TREC-2019-Deep-Learning/) and the [MS Marco Passage Reranking](https://github.com/microsoft/MSMARCO-Passage-Ranking/) dataset. | Model-Name | NDCG@10 (TREC DL 19) | MRR@10 (MS Marco Dev) | Docs / Sec | | ------------- |:-------------| -----| --- | | **Version 2 models** | | | | cross-encoder/ms-marco-TinyBERT-L-2-v2 | 69.84 | 32.56 | 9000 | cross-encoder/ms-marco-MiniLM-L-2-v2 | 71.01 | 34.85 | 4100 | cross-encoder/ms-marco-MiniLM-L-4-v2 | 73.04 | 37.70 | 2500 | cross-encoder/ms-marco-MiniLM-L-6-v2 | 74.30 | 39.01 | 1800 | cross-encoder/ms-marco-MiniLM-L-12-v2 | 74.31 | 39.02 | 960 | **Version 1 models** | | | | cross-encoder/ms-marco-TinyBERT-L-2 | 67.43 | 30.15 | 9000 | cross-encoder/ms-marco-TinyBERT-L-4 | 68.09 | 34.50 | 2900 | cross-encoder/ms-marco-TinyBERT-L-6 | 69.57 | 36.13 | 680 | cross-encoder/ms-marco-electra-base | 71.99 | 36.41 | 340 | **Other models** | | | | nboost/pt-tinybert-msmarco | 63.63 | 28.80 | 2900 | nboost/pt-bert-base-uncased-msmarco | 70.94 | 34.75 | 340 | nboost/pt-bert-large-msmarco | 73.36 | 36.48 | 100 | Capreolus/electra-base-msmarco | 71.23 | 36.89 | 340 | amberoad/bert-multilingual-passage-reranking-msmarco | 68.40 | 35.54 | 330 | sebastian-hofstaetter/distilbert-cat-margin_mse-T2-msmarco | 72.82 | 37.88 | 720 Note: Runtime was computed on a V100 GPU.
{"license": "apache-2.0"}
cross-encoder/ms-marco-MiniLM-L-6-v2
null
[ "transformers", "pytorch", "jax", "bert", "text-classification", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #jax #bert #text-classification #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
Cross-Encoder for MS Marco ========================== This model was trained on the MS Marco Passage Ranking task. The model can be used for Information Retrieval: Given a query, encode the query will all possible passages (e.g. retrieved with ElasticSearch). Then sort the passages in a decreasing order. See URL Retrieve & Re-rank for more details. The training code is available here: URL Training MS Marco Usage with Transformers ----------------------- Usage with SentenceTransformers ------------------------------- The usage becomes easier when you have SentenceTransformers installed. Then, you can use the pre-trained models like this: Performance ----------- In the following table, we provide various pre-trained Cross-Encoders together with their performance on the TREC Deep Learning 2019 and the MS Marco Passage Reranking dataset. Note: Runtime was computed on a V100 GPU.
[]
[ "TAGS\n#transformers #pytorch #jax #bert #text-classification #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n" ]
text-classification
transformers
# Cross-Encoder for MS Marco This model was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task. The model can be used for Information Retrieval: Given a query, encode the query will all possible passages (e.g. retrieved with ElasticSearch). Then sort the passages in a decreasing order. See [SBERT.net Retrieve & Re-rank](https://www.sbert.net/examples/applications/retrieve_rerank/README.html) for more details. The training code is available here: [SBERT.net Training MS Marco](https://github.com/UKPLab/sentence-transformers/tree/master/examples/training/ms_marco) ## Usage with Transformers ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch model = AutoModelForSequenceClassification.from_pretrained('model_name') tokenizer = AutoTokenizer.from_pretrained('model_name') features = tokenizer(['How many people live in Berlin?', 'How many people live in Berlin?'], ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.'], padding=True, truncation=True, return_tensors="pt") model.eval() with torch.no_grad(): scores = model(**features).logits print(scores) ``` ## Usage with SentenceTransformers The usage becomes easier when you have [SentenceTransformers](https://www.sbert.net/) installed. Then, you can use the pre-trained models like this: ```python from sentence_transformers import CrossEncoder model = CrossEncoder('model_name', max_length=512) scores = model.predict([('Query', 'Paragraph1'), ('Query', 'Paragraph2') , ('Query', 'Paragraph3')]) ``` ## Performance In the following table, we provide various pre-trained Cross-Encoders together with their performance on the [TREC Deep Learning 2019](https://microsoft.github.io/TREC-2019-Deep-Learning/) and the [MS Marco Passage Reranking](https://github.com/microsoft/MSMARCO-Passage-Ranking/) dataset. | Model-Name | NDCG@10 (TREC DL 19) | MRR@10 (MS Marco Dev) | Docs / Sec | | ------------- |:-------------| -----| --- | | **Version 2 models** | | | | cross-encoder/ms-marco-TinyBERT-L-2-v2 | 69.84 | 32.56 | 9000 | cross-encoder/ms-marco-MiniLM-L-2-v2 | 71.01 | 34.85 | 4100 | cross-encoder/ms-marco-MiniLM-L-4-v2 | 73.04 | 37.70 | 2500 | cross-encoder/ms-marco-MiniLM-L-6-v2 | 74.30 | 39.01 | 1800 | cross-encoder/ms-marco-MiniLM-L-12-v2 | 74.31 | 39.02 | 960 | **Version 1 models** | | | | cross-encoder/ms-marco-TinyBERT-L-2 | 67.43 | 30.15 | 9000 | cross-encoder/ms-marco-TinyBERT-L-4 | 68.09 | 34.50 | 2900 | cross-encoder/ms-marco-TinyBERT-L-6 | 69.57 | 36.13 | 680 | cross-encoder/ms-marco-electra-base | 71.99 | 36.41 | 340 | **Other models** | | | | nboost/pt-tinybert-msmarco | 63.63 | 28.80 | 2900 | nboost/pt-bert-base-uncased-msmarco | 70.94 | 34.75 | 340 | nboost/pt-bert-large-msmarco | 73.36 | 36.48 | 100 | Capreolus/electra-base-msmarco | 71.23 | 36.89 | 340 | amberoad/bert-multilingual-passage-reranking-msmarco | 68.40 | 35.54 | 330 | sebastian-hofstaetter/distilbert-cat-margin_mse-T2-msmarco | 72.82 | 37.88 | 720 Note: Runtime was computed on a V100 GPU.
{"license": "apache-2.0"}
cross-encoder/ms-marco-TinyBERT-L-2-v2
null
[ "transformers", "pytorch", "jax", "bert", "text-classification", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #jax #bert #text-classification #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
Cross-Encoder for MS Marco ========================== This model was trained on the MS Marco Passage Ranking task. The model can be used for Information Retrieval: Given a query, encode the query will all possible passages (e.g. retrieved with ElasticSearch). Then sort the passages in a decreasing order. See URL Retrieve & Re-rank for more details. The training code is available here: URL Training MS Marco Usage with Transformers ----------------------- Usage with SentenceTransformers ------------------------------- The usage becomes easier when you have SentenceTransformers installed. Then, you can use the pre-trained models like this: Performance ----------- In the following table, we provide various pre-trained Cross-Encoders together with their performance on the TREC Deep Learning 2019 and the MS Marco Passage Reranking dataset. Note: Runtime was computed on a V100 GPU.
[]
[ "TAGS\n#transformers #pytorch #jax #bert #text-classification #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n" ]
text-classification
transformers
# Cross-Encoder for MS Marco This model was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task. The model can be used for Information Retrieval: Given a query, encode the query will all possible passages (e.g. retrieved with ElasticSearch). Then sort the passages in a decreasing order. See [SBERT.net Retrieve & Re-rank](https://www.sbert.net/examples/applications/retrieve_rerank/README.html) for more details. The training code is available here: [SBERT.net Training MS Marco](https://github.com/UKPLab/sentence-transformers/tree/master/examples/training/ms_marco) ## Usage with Transformers ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch model = AutoModelForSequenceClassification.from_pretrained('model_name') tokenizer = AutoTokenizer.from_pretrained('model_name') features = tokenizer(['How many people live in Berlin?', 'How many people live in Berlin?'], ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.'], padding=True, truncation=True, return_tensors="pt") model.eval() with torch.no_grad(): scores = model(**features).logits print(scores) ``` ## Usage with SentenceTransformers The usage becomes easier when you have [SentenceTransformers](https://www.sbert.net/) installed. Then, you can use the pre-trained models like this: ```python from sentence_transformers import CrossEncoder model = CrossEncoder('model_name', max_length=512) scores = model.predict([('Query', 'Paragraph1'), ('Query', 'Paragraph2') , ('Query', 'Paragraph3')]) ``` ## Performance In the following table, we provide various pre-trained Cross-Encoders together with their performance on the [TREC Deep Learning 2019](https://microsoft.github.io/TREC-2019-Deep-Learning/) and the [MS Marco Passage Reranking](https://github.com/microsoft/MSMARCO-Passage-Ranking/) dataset. | Model-Name | NDCG@10 (TREC DL 19) | MRR@10 (MS Marco Dev) | Docs / Sec | | ------------- |:-------------| -----| --- | | **Version 2 models** | | | | cross-encoder/ms-marco-TinyBERT-L-2-v2 | 69.84 | 32.56 | 9000 | cross-encoder/ms-marco-MiniLM-L-2-v2 | 71.01 | 34.85 | 4100 | cross-encoder/ms-marco-MiniLM-L-4-v2 | 73.04 | 37.70 | 2500 | cross-encoder/ms-marco-MiniLM-L-6-v2 | 74.30 | 39.01 | 1800 | cross-encoder/ms-marco-MiniLM-L-12-v2 | 74.31 | 39.02 | 960 | **Version 1 models** | | | | cross-encoder/ms-marco-TinyBERT-L-2 | 67.43 | 30.15 | 9000 | cross-encoder/ms-marco-TinyBERT-L-4 | 68.09 | 34.50 | 2900 | cross-encoder/ms-marco-TinyBERT-L-6 | 69.57 | 36.13 | 680 | cross-encoder/ms-marco-electra-base | 71.99 | 36.41 | 340 | **Other models** | | | | nboost/pt-tinybert-msmarco | 63.63 | 28.80 | 2900 | nboost/pt-bert-base-uncased-msmarco | 70.94 | 34.75 | 340 | nboost/pt-bert-large-msmarco | 73.36 | 36.48 | 100 | Capreolus/electra-base-msmarco | 71.23 | 36.89 | 340 | amberoad/bert-multilingual-passage-reranking-msmarco | 68.40 | 35.54 | 330 | sebastian-hofstaetter/distilbert-cat-margin_mse-T2-msmarco | 72.82 | 37.88 | 720 Note: Runtime was computed on a V100 GPU.
{"license": "apache-2.0"}
cross-encoder/ms-marco-TinyBERT-L-2
null
[ "transformers", "pytorch", "jax", "bert", "text-classification", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #jax #bert #text-classification #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
Cross-Encoder for MS Marco ========================== This model was trained on the MS Marco Passage Ranking task. The model can be used for Information Retrieval: Given a query, encode the query will all possible passages (e.g. retrieved with ElasticSearch). Then sort the passages in a decreasing order. See URL Retrieve & Re-rank for more details. The training code is available here: URL Training MS Marco Usage with Transformers ----------------------- Usage with SentenceTransformers ------------------------------- The usage becomes easier when you have SentenceTransformers installed. Then, you can use the pre-trained models like this: Performance ----------- In the following table, we provide various pre-trained Cross-Encoders together with their performance on the TREC Deep Learning 2019 and the MS Marco Passage Reranking dataset. Note: Runtime was computed on a V100 GPU.
[]
[ "TAGS\n#transformers #pytorch #jax #bert #text-classification #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n" ]
text-classification
transformers
# Cross-Encoder for MS Marco This model was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task. The model can be used for Information Retrieval: Given a query, encode the query will all possible passages (e.g. retrieved with ElasticSearch). Then sort the passages in a decreasing order. See [SBERT.net Retrieve & Re-rank](https://www.sbert.net/examples/applications/retrieve_rerank/README.html) for more details. The training code is available here: [SBERT.net Training MS Marco](https://github.com/UKPLab/sentence-transformers/tree/master/examples/training/ms_marco) ## Usage with Transformers ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch model = AutoModelForSequenceClassification.from_pretrained('model_name') tokenizer = AutoTokenizer.from_pretrained('model_name') features = tokenizer(['How many people live in Berlin?', 'How many people live in Berlin?'], ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.'], padding=True, truncation=True, return_tensors="pt") model.eval() with torch.no_grad(): scores = model(**features).logits print(scores) ``` ## Usage with SentenceTransformers The usage becomes easier when you have [SentenceTransformers](https://www.sbert.net/) installed. Then, you can use the pre-trained models like this: ```python from sentence_transformers import CrossEncoder model = CrossEncoder('model_name', max_length=512) scores = model.predict([('Query', 'Paragraph1'), ('Query', 'Paragraph2') , ('Query', 'Paragraph3')]) ``` ## Performance In the following table, we provide various pre-trained Cross-Encoders together with their performance on the [TREC Deep Learning 2019](https://microsoft.github.io/TREC-2019-Deep-Learning/) and the [MS Marco Passage Reranking](https://github.com/microsoft/MSMARCO-Passage-Ranking/) dataset. | Model-Name | NDCG@10 (TREC DL 19) | MRR@10 (MS Marco Dev) | Docs / Sec | | ------------- |:-------------| -----| --- | | **Version 2 models** | | | | cross-encoder/ms-marco-TinyBERT-L-2-v2 | 69.84 | 32.56 | 9000 | cross-encoder/ms-marco-MiniLM-L-2-v2 | 71.01 | 34.85 | 4100 | cross-encoder/ms-marco-MiniLM-L-4-v2 | 73.04 | 37.70 | 2500 | cross-encoder/ms-marco-MiniLM-L-6-v2 | 74.30 | 39.01 | 1800 | cross-encoder/ms-marco-MiniLM-L-12-v2 | 74.31 | 39.02 | 960 | **Version 1 models** | | | | cross-encoder/ms-marco-TinyBERT-L-2 | 67.43 | 30.15 | 9000 | cross-encoder/ms-marco-TinyBERT-L-4 | 68.09 | 34.50 | 2900 | cross-encoder/ms-marco-TinyBERT-L-6 | 69.57 | 36.13 | 680 | cross-encoder/ms-marco-electra-base | 71.99 | 36.41 | 340 | **Other models** | | | | nboost/pt-tinybert-msmarco | 63.63 | 28.80 | 2900 | nboost/pt-bert-base-uncased-msmarco | 70.94 | 34.75 | 340 | nboost/pt-bert-large-msmarco | 73.36 | 36.48 | 100 | Capreolus/electra-base-msmarco | 71.23 | 36.89 | 340 | amberoad/bert-multilingual-passage-reranking-msmarco | 68.40 | 35.54 | 330 | sebastian-hofstaetter/distilbert-cat-margin_mse-T2-msmarco | 72.82 | 37.88 | 720 Note: Runtime was computed on a V100 GPU.
{"license": "apache-2.0"}
cross-encoder/ms-marco-TinyBERT-L-4
null
[ "transformers", "pytorch", "jax", "bert", "text-classification", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #jax #bert #text-classification #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
Cross-Encoder for MS Marco ========================== This model was trained on the MS Marco Passage Ranking task. The model can be used for Information Retrieval: Given a query, encode the query will all possible passages (e.g. retrieved with ElasticSearch). Then sort the passages in a decreasing order. See URL Retrieve & Re-rank for more details. The training code is available here: URL Training MS Marco Usage with Transformers ----------------------- Usage with SentenceTransformers ------------------------------- The usage becomes easier when you have SentenceTransformers installed. Then, you can use the pre-trained models like this: Performance ----------- In the following table, we provide various pre-trained Cross-Encoders together with their performance on the TREC Deep Learning 2019 and the MS Marco Passage Reranking dataset. Note: Runtime was computed on a V100 GPU.
[]
[ "TAGS\n#transformers #pytorch #jax #bert #text-classification #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
text-classification
transformers
# Cross-Encoder for MS Marco This model was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task. The model can be used for Information Retrieval: Given a query, encode the query will all possible passages (e.g. retrieved with ElasticSearch). Then sort the passages in a decreasing order. See [SBERT.net Retrieve & Re-rank](https://www.sbert.net/examples/applications/retrieve_rerank/README.html) for more details. The training code is available here: [SBERT.net Training MS Marco](https://github.com/UKPLab/sentence-transformers/tree/master/examples/training/ms_marco) ## Usage with Transformers ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch model = AutoModelForSequenceClassification.from_pretrained('model_name') tokenizer = AutoTokenizer.from_pretrained('model_name') features = tokenizer(['How many people live in Berlin?', 'How many people live in Berlin?'], ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.'], padding=True, truncation=True, return_tensors="pt") model.eval() with torch.no_grad(): scores = model(**features).logits print(scores) ``` ## Usage with SentenceTransformers The usage becomes easier when you have [SentenceTransformers](https://www.sbert.net/) installed. Then, you can use the pre-trained models like this: ```python from sentence_transformers import CrossEncoder model = CrossEncoder('model_name', max_length=512) scores = model.predict([('Query', 'Paragraph1'), ('Query', 'Paragraph2') , ('Query', 'Paragraph3')]) ``` ## Performance In the following table, we provide various pre-trained Cross-Encoders together with their performance on the [TREC Deep Learning 2019](https://microsoft.github.io/TREC-2019-Deep-Learning/) and the [MS Marco Passage Reranking](https://github.com/microsoft/MSMARCO-Passage-Ranking/) dataset. | Model-Name | NDCG@10 (TREC DL 19) | MRR@10 (MS Marco Dev) | Docs / Sec | | ------------- |:-------------| -----| --- | | **Version 2 models** | | | | cross-encoder/ms-marco-TinyBERT-L-2-v2 | 69.84 | 32.56 | 9000 | cross-encoder/ms-marco-MiniLM-L-2-v2 | 71.01 | 34.85 | 4100 | cross-encoder/ms-marco-MiniLM-L-4-v2 | 73.04 | 37.70 | 2500 | cross-encoder/ms-marco-MiniLM-L-6-v2 | 74.30 | 39.01 | 1800 | cross-encoder/ms-marco-MiniLM-L-12-v2 | 74.31 | 39.02 | 960 | **Version 1 models** | | | | cross-encoder/ms-marco-TinyBERT-L-2 | 67.43 | 30.15 | 9000 | cross-encoder/ms-marco-TinyBERT-L-4 | 68.09 | 34.50 | 2900 | cross-encoder/ms-marco-TinyBERT-L-6 | 69.57 | 36.13 | 680 | cross-encoder/ms-marco-electra-base | 71.99 | 36.41 | 340 | **Other models** | | | | nboost/pt-tinybert-msmarco | 63.63 | 28.80 | 2900 | nboost/pt-bert-base-uncased-msmarco | 70.94 | 34.75 | 340 | nboost/pt-bert-large-msmarco | 73.36 | 36.48 | 100 | Capreolus/electra-base-msmarco | 71.23 | 36.89 | 340 | amberoad/bert-multilingual-passage-reranking-msmarco | 68.40 | 35.54 | 330 | sebastian-hofstaetter/distilbert-cat-margin_mse-T2-msmarco | 72.82 | 37.88 | 720 Note: Runtime was computed on a V100 GPU.
{"license": "apache-2.0"}
cross-encoder/ms-marco-TinyBERT-L-6
null
[ "transformers", "pytorch", "jax", "bert", "text-classification", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #jax #bert #text-classification #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
Cross-Encoder for MS Marco ========================== This model was trained on the MS Marco Passage Ranking task. The model can be used for Information Retrieval: Given a query, encode the query will all possible passages (e.g. retrieved with ElasticSearch). Then sort the passages in a decreasing order. See URL Retrieve & Re-rank for more details. The training code is available here: URL Training MS Marco Usage with Transformers ----------------------- Usage with SentenceTransformers ------------------------------- The usage becomes easier when you have SentenceTransformers installed. Then, you can use the pre-trained models like this: Performance ----------- In the following table, we provide various pre-trained Cross-Encoders together with their performance on the TREC Deep Learning 2019 and the MS Marco Passage Reranking dataset. Note: Runtime was computed on a V100 GPU.
[]
[ "TAGS\n#transformers #pytorch #jax #bert #text-classification #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n" ]
text-classification
transformers
# Cross-Encoder for MS Marco This model was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task. The model can be used for Information Retrieval: Given a query, encode the query will all possible passages (e.g. retrieved with ElasticSearch). Then sort the passages in a decreasing order. See [SBERT.net Retrieve & Re-rank](https://www.sbert.net/examples/applications/retrieve_rerank/README.html) for more details. The training code is available here: [SBERT.net Training MS Marco](https://github.com/UKPLab/sentence-transformers/tree/master/examples/training/ms_marco) ## Usage with Transformers ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch model = AutoModelForSequenceClassification.from_pretrained('model_name') tokenizer = AutoTokenizer.from_pretrained('model_name') features = tokenizer(['How many people live in Berlin?', 'How many people live in Berlin?'], ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.'], padding=True, truncation=True, return_tensors="pt") model.eval() with torch.no_grad(): scores = model(**features).logits print(scores) ``` ## Usage with SentenceTransformers The usage becomes easier when you have [SentenceTransformers](https://www.sbert.net/) installed. Then, you can use the pre-trained models like this: ```python from sentence_transformers import CrossEncoder model = CrossEncoder('model_name', max_length=512) scores = model.predict([('Query', 'Paragraph1'), ('Query', 'Paragraph2') , ('Query', 'Paragraph3')]) ``` ## Performance In the following table, we provide various pre-trained Cross-Encoders together with their performance on the [TREC Deep Learning 2019](https://microsoft.github.io/TREC-2019-Deep-Learning/) and the [MS Marco Passage Reranking](https://github.com/microsoft/MSMARCO-Passage-Ranking/) dataset. | Model-Name | NDCG@10 (TREC DL 19) | MRR@10 (MS Marco Dev) | Docs / Sec | | ------------- |:-------------| -----| --- | | **Version 2 models** | | | | cross-encoder/ms-marco-TinyBERT-L-2-v2 | 69.84 | 32.56 | 9000 | cross-encoder/ms-marco-MiniLM-L-2-v2 | 71.01 | 34.85 | 4100 | cross-encoder/ms-marco-MiniLM-L-4-v2 | 73.04 | 37.70 | 2500 | cross-encoder/ms-marco-MiniLM-L-6-v2 | 74.30 | 39.01 | 1800 | cross-encoder/ms-marco-MiniLM-L-12-v2 | 74.31 | 39.02 | 960 | **Version 1 models** | | | | cross-encoder/ms-marco-TinyBERT-L-2 | 67.43 | 30.15 | 9000 | cross-encoder/ms-marco-TinyBERT-L-4 | 68.09 | 34.50 | 2900 | cross-encoder/ms-marco-TinyBERT-L-6 | 69.57 | 36.13 | 680 | cross-encoder/ms-marco-electra-base | 71.99 | 36.41 | 340 | **Other models** | | | | nboost/pt-tinybert-msmarco | 63.63 | 28.80 | 2900 | nboost/pt-bert-base-uncased-msmarco | 70.94 | 34.75 | 340 | nboost/pt-bert-large-msmarco | 73.36 | 36.48 | 100 | Capreolus/electra-base-msmarco | 71.23 | 36.89 | 340 | amberoad/bert-multilingual-passage-reranking-msmarco | 68.40 | 35.54 | 330 | sebastian-hofstaetter/distilbert-cat-margin_mse-T2-msmarco | 72.82 | 37.88 | 720 Note: Runtime was computed on a V100 GPU.
{"license": "apache-2.0"}
cross-encoder/ms-marco-electra-base
null
[ "transformers", "pytorch", "electra", "text-classification", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #electra #text-classification #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
Cross-Encoder for MS Marco
==========================

This model was trained on the MS Marco Passage Ranking task.

The model can be used for Information Retrieval: Given a query, encode the query with all possible passages (e.g. retrieved with ElasticSearch). Then sort the passages in decreasing order. See URL Retrieve & Re-rank for more details. The training code is available here: URL Training MS Marco

Usage with Transformers
-----------------------

Usage with SentenceTransformers
-------------------------------

The usage becomes easier when you have SentenceTransformers installed. Then, you can use the pre-trained models like this:

Performance
-----------

In the following table, we provide various pre-trained Cross-Encoders together with their performance on the TREC Deep Learning 2019 and the MS Marco Passage Reranking dataset.

Note: Runtime was computed on a V100 GPU.
[]
[ "TAGS\n#transformers #pytorch #electra #text-classification #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
text-classification
transformers
# Cross-Encoder for MS MARCO - EN-DE

This is a cross-lingual Cross-Encoder model for EN-DE that can be used for passage re-ranking. It was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task.

The model can be used for Information Retrieval: See [SBERT.net Retrieve & Re-rank](https://www.sbert.net/examples/applications/retrieve_rerank/README.html).

The training code is available in this repository, see `train_script.py`.

## Usage with SentenceTransformers

When you have [SentenceTransformers](https://www.sbert.net/) installed, you can use the model like this:

```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('model_name', max_length=512)
query = 'How many people live in Berlin?'
docs = ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.']
pairs = [(query, doc) for doc in docs]
scores = model.predict(pairs)
```

## Usage with Transformers

With the transformers library, you can use the model like this:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model = AutoModelForSequenceClassification.from_pretrained('model_name')
tokenizer = AutoTokenizer.from_pretrained('model_name')

features = tokenizer(['How many people live in Berlin?', 'How many people live in Berlin?'], ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.'], padding=True, truncation=True, return_tensors="pt")

model.eval()
with torch.no_grad():
    scores = model(**features).logits
    print(scores)
```

## Performance

The performance was evaluated on three datasets:

- **TREC-DL19 EN-EN**: The original [TREC 2019 Deep Learning Track](https://microsoft.github.io/msmarco/TREC-Deep-Learning-2019.html): Given an English query and 1000 documents (retrieved by BM25 lexical search), rank documents according to their relevance. We compute NDCG@10. BM25 achieves a score of 45.46, a perfect re-ranker can achieve a score of 95.47.
- **TREC-DL19 DE-EN**: The English queries of TREC-DL19 have been translated by a German native speaker to German. We rank the German queries versus the English passages from the original TREC-DL19 setup. We compute NDCG@10.
- **GermanDPR DE-DE**: The [GermanDPR](https://www.deepset.ai/germanquad) dataset provides German queries and German passages from Wikipedia. We indexed the 2.8 million paragraphs from German Wikipedia and retrieved for each query the top 100 most relevant passages using BM25 lexical search with Elasticsearch. We compute MRR@10. BM25 achieves a score of 35.85, a perfect re-ranker can achieve a score of 76.27.

We also check the performance of bi-encoders using the same evaluation: the retrieved documents from BM25 lexical search are re-ranked using query & passage embeddings with cosine-similarity. Bi-Encoders can also be used for end-to-end semantic search.
| Model-Name | TREC-DL19 EN-EN | TREC-DL19 DE-EN | GermanDPR DE-DE | Docs / Sec | | ------------- |:-------------:| :-----: | :---: | :----: | | BM25 | 45.46 | - | 35.85 | -| | **Cross-Encoder Re-Rankers** | | | | | [cross-encoder/msmarco-MiniLM-L6-en-de-v1](https://huggingface.co/cross-encoder/msmarco-MiniLM-L6-en-de-v1) | 72.43 | 65.53 | 46.77 | 1600 | | [cross-encoder/msmarco-MiniLM-L12-en-de-v1](https://huggingface.co/cross-encoder/msmarco-MiniLM-L12-en-de-v1) | 72.94 | 66.07 | 49.91 | 900 | | [svalabs/cross-electra-ms-marco-german-uncased](https://huggingface.co/svalabs/cross-electra-ms-marco-german-uncased) (DE only) | - | - | 53.67 | 260 | | [deepset/gbert-base-germandpr-reranking](https://huggingface.co/deepset/gbert-base-germandpr-reranking) (DE only) | - | - | 53.59 | 260 | | **Bi-Encoders (re-ranking)** | | | | | [sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-lng-aligned](https://huggingface.co/sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-lng-aligned) | 63.38 | 58.28 | 37.88 | 940 | | [sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-trained-scratch](https://huggingface.co/sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-trained-scratch) | 65.51 | 58.69 | 38.32 | 940 | | [svalabs/bi-electra-ms-marco-german-uncased](https://huggingface.co/svalabs/bi-electra-ms-marco-german-uncased) (DE only) | - | - | 34.31 | 450 | | [deepset/gbert-base-germandpr-question_encoder](https://huggingface.co/deepset/gbert-base-germandpr-question_encoder) (DE only) | - | - | 42.55 | 450 | Note: Docs / Sec gives the number of (query, document) pairs we can re-rank within a second on a V100 GPU.
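The table above also lists bi-encoders that re-rank the BM25 candidates with embedding cosine-similarity. A rough sketch of that baseline is given below; it is not part of the original card, assumes a recent sentence-transformers release (for `util.cos_sim`), and uses the multilingual bi-encoder from the table purely as an example.

```python
from sentence_transformers import SentenceTransformer, util

# Bi-encoder from the table above; the query and passages are toy examples.
model = SentenceTransformer("sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-lng-aligned")

query = "Wie viele Menschen leben in Berlin?"
passages = [
    "Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.",
    "New York City is famous for the Metropolitan Museum of Art.",
]

query_emb = model.encode(query, convert_to_tensor=True)
passage_embs = model.encode(passages, convert_to_tensor=True)

# Re-rank the BM25 candidates by cosine similarity between query and passage embeddings.
cos_scores = util.cos_sim(query_emb, passage_embs)[0]
for passage, score in sorted(zip(passages, cos_scores.tolist()), key=lambda x: x[1], reverse=True):
    print(f"{score:.3f}\t{passage}")
```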
{"license": "apache-2.0"}
cross-encoder/msmarco-MiniLM-L12-en-de-v1
null
[ "transformers", "pytorch", "bert", "text-classification", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #bert #text-classification #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
Cross-Encoder for MS MARCO - EN-DE
==================================

This is a cross-lingual Cross-Encoder model for EN-DE that can be used for passage re-ranking. It was trained on the MS Marco Passage Ranking task.

The model can be used for Information Retrieval: See URL Retrieve & Re-rank.

The training code is available in this repository, see 'train\_script.py'.

Usage with SentenceTransformers
-------------------------------

When you have SentenceTransformers installed, you can use the model like this:

Usage with Transformers
-----------------------

With the transformers library, you can use the model like this:

Performance
-----------

The performance was evaluated on three datasets:

* TREC-DL19 EN-EN: The original TREC 2019 Deep Learning Track: Given an English query and 1000 documents (retrieved by BM25 lexical search), rank documents according to their relevance. We compute NDCG@10. BM25 achieves a score of 45.46, a perfect re-ranker can achieve a score of 95.47.
* TREC-DL19 DE-EN: The English queries of TREC-DL19 have been translated by a German native speaker to German. We rank the German queries versus the English passages from the original TREC-DL19 setup. We compute NDCG@10.
* GermanDPR DE-DE: The GermanDPR dataset provides German queries and German passages from Wikipedia. We indexed the 2.8 million paragraphs from German Wikipedia and retrieved for each query the top 100 most relevant passages using BM25 lexical search with Elasticsearch. We compute MRR@10. BM25 achieves a score of 35.85, a perfect re-ranker can achieve a score of 76.27.

We also check the performance of bi-encoders using the same evaluation: the retrieved documents from BM25 lexical search are re-ranked using query & passage embeddings with cosine-similarity. Bi-Encoders can also be used for end-to-end semantic search.

Note: Docs / Sec gives the number of (query, document) pairs we can re-rank within a second on a V100 GPU.
[]
[ "TAGS\n#transformers #pytorch #bert #text-classification #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n" ]
text-classification
transformers
# Cross-Encoder for MS MARCO - EN-DE

This is a cross-lingual Cross-Encoder model for EN-DE that can be used for passage re-ranking. It was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task.

The model can be used for Information Retrieval: See [SBERT.net Retrieve & Re-rank](https://www.sbert.net/examples/applications/retrieve_rerank/README.html).

The training code is available in this repository, see `train_script.py`.

## Usage with SentenceTransformers

When you have [SentenceTransformers](https://www.sbert.net/) installed, you can use the model like this:

```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('model_name', max_length=512)
query = 'How many people live in Berlin?'
docs = ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.']
pairs = [(query, doc) for doc in docs]
scores = model.predict(pairs)
```

## Usage with Transformers

With the transformers library, you can use the model like this:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model = AutoModelForSequenceClassification.from_pretrained('model_name')
tokenizer = AutoTokenizer.from_pretrained('model_name')

features = tokenizer(['How many people live in Berlin?', 'How many people live in Berlin?'], ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.'], padding=True, truncation=True, return_tensors="pt")

model.eval()
with torch.no_grad():
    scores = model(**features).logits
    print(scores)
```

## Performance

The performance was evaluated on three datasets:

- **TREC-DL19 EN-EN**: The original [TREC 2019 Deep Learning Track](https://microsoft.github.io/msmarco/TREC-Deep-Learning-2019.html): Given an English query and 1000 documents (retrieved by BM25 lexical search), rank documents according to their relevance. We compute NDCG@10. BM25 achieves a score of 45.46, a perfect re-ranker can achieve a score of 95.47.
- **TREC-DL19 DE-EN**: The English queries of TREC-DL19 have been translated by a German native speaker to German. We rank the German queries versus the English passages from the original TREC-DL19 setup. We compute NDCG@10.
- **GermanDPR DE-DE**: The [GermanDPR](https://www.deepset.ai/germanquad) dataset provides German queries and German passages from Wikipedia. We indexed the 2.8 million paragraphs from German Wikipedia and retrieved for each query the top 100 most relevant passages using BM25 lexical search with Elasticsearch. We compute MRR@10. BM25 achieves a score of 35.85, a perfect re-ranker can achieve a score of 76.27.

We also check the performance of bi-encoders using the same evaluation: the retrieved documents from BM25 lexical search are re-ranked using query & passage embeddings with cosine-similarity. Bi-Encoders can also be used for end-to-end semantic search.
| Model-Name | TREC-DL19 EN-EN | TREC-DL19 DE-EN | GermanDPR DE-DE | Docs / Sec | | ------------- |:-------------:| :-----: | :---: | :----: | | BM25 | 45.46 | - | 35.85 | -| | **Cross-Encoder Re-Rankers** | | | | | [cross-encoder/msmarco-MiniLM-L6-en-de-v1](https://huggingface.co/cross-encoder/msmarco-MiniLM-L6-en-de-v1) | 72.43 | 65.53 | 46.77 | 1600 | | [cross-encoder/msmarco-MiniLM-L12-en-de-v1](https://huggingface.co/cross-encoder/msmarco-MiniLM-L12-en-de-v1) | 72.94 | 66.07 | 49.91 | 900 | | [svalabs/cross-electra-ms-marco-german-uncased](https://huggingface.co/svalabs/cross-electra-ms-marco-german-uncased) (DE only) | - | - | 53.67 | 260 | | [deepset/gbert-base-germandpr-reranking](https://huggingface.co/deepset/gbert-base-germandpr-reranking) (DE only) | - | - | 53.59 | 260 | | **Bi-Encoders (re-ranking)** | | | | | [sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-lng-aligned](https://huggingface.co/sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-lng-aligned) | 63.38 | 58.28 | 37.88 | 940 | | [sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-trained-scratch](https://huggingface.co/sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-trained-scratch) | 65.51 | 58.69 | 38.32 | 940 | | [svalabs/bi-electra-ms-marco-german-uncased](https://huggingface.co/svalabs/bi-electra-ms-marco-german-uncased) (DE only) | - | - | 34.31 | 450 | | [deepset/gbert-base-germandpr-question_encoder](https://huggingface.co/deepset/gbert-base-germandpr-question_encoder) (DE only) | - | - | 42.55 | 450 | Note: Docs / Sec gives the number of (query, document) pairs we can re-rank within a second on a V100 GPU.
{"license": "apache-2.0"}
cross-encoder/msmarco-MiniLM-L6-en-de-v1
null
[ "transformers", "pytorch", "bert", "text-classification", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #bert #text-classification #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
Cross-Encoder for MS MARCO - EN-DE
==================================

This is a cross-lingual Cross-Encoder model for EN-DE that can be used for passage re-ranking. It was trained on the MS Marco Passage Ranking task.

The model can be used for Information Retrieval: See URL Retrieve & Re-rank.

The training code is available in this repository, see 'train\_script.py'.

Usage with SentenceTransformers
-------------------------------

When you have SentenceTransformers installed, you can use the model like this:

Usage with Transformers
-----------------------

With the transformers library, you can use the model like this:

Performance
-----------

The performance was evaluated on three datasets:

* TREC-DL19 EN-EN: The original TREC 2019 Deep Learning Track: Given an English query and 1000 documents (retrieved by BM25 lexical search), rank documents according to their relevance. We compute NDCG@10. BM25 achieves a score of 45.46, a perfect re-ranker can achieve a score of 95.47.
* TREC-DL19 DE-EN: The English queries of TREC-DL19 have been translated by a German native speaker to German. We rank the German queries versus the English passages from the original TREC-DL19 setup. We compute NDCG@10.
* GermanDPR DE-DE: The GermanDPR dataset provides German queries and German passages from Wikipedia. We indexed the 2.8 million paragraphs from German Wikipedia and retrieved for each query the top 100 most relevant passages using BM25 lexical search with Elasticsearch. We compute MRR@10. BM25 achieves a score of 35.85, a perfect re-ranker can achieve a score of 76.27.

We also check the performance of bi-encoders using the same evaluation: the retrieved documents from BM25 lexical search are re-ranked using query & passage embeddings with cosine-similarity. Bi-Encoders can also be used for end-to-end semantic search.

Note: Docs / Sec gives the number of (query, document) pairs we can re-rank within a second on a V100 GPU.
[]
[ "TAGS\n#transformers #pytorch #bert #text-classification #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
zero-shot-classification
transformers
# Cross-Encoder for Natural Language Inference This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. ## Training Data The model was trained on the [SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral. ## Performance For evaluation results, see [SBERT.net - Pretrained Cross-Encoder](https://www.sbert.net/docs/pretrained_cross-encoders.html#nli). ## Usage Pre-trained models can be used like this: ```python from sentence_transformers import CrossEncoder model = CrossEncoder('cross-encoder/nli-MiniLM2-L6-H768') scores = model.predict([('A man is eating pizza', 'A man eats something'), ('A black race car starts up in front of a crowd of people.', 'A man is driving down a lonely road.')]) #Convert scores to labels label_mapping = ['contradiction', 'entailment', 'neutral'] labels = [label_mapping[score_max] for score_max in scores.argmax(axis=1)] ``` ## Usage with Transformers AutoModel You can use the model also directly with Transformers library (without SentenceTransformers library): ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/nli-MiniLM2-L6-H768') tokenizer = AutoTokenizer.from_pretrained('cross-encoder/nli-MiniLM2-L6-H768') features = tokenizer(['A man is eating pizza', 'A black race car starts up in front of a crowd of people.'], ['A man eats something', 'A man is driving down a lonely road.'], padding=True, truncation=True, return_tensors="pt") model.eval() with torch.no_grad(): scores = model(**features).logits label_mapping = ['contradiction', 'entailment', 'neutral'] labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)] print(labels) ``` ## Zero-Shot Classification This model can also be used for zero-shot-classification: ```python from transformers import pipeline classifier = pipeline("zero-shot-classification", model='cross-encoder/nli-MiniLM2-L6-H768') sent = "Apple just announced the newest iPhone X" candidate_labels = ["technology", "sports", "politics"] res = classifier(sent, candidate_labels) print(res) ```
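The zero-shot pipeline call above hides how the NLI scores are turned into label scores. The sketch below (not part of the original card) illustrates the underlying idea by hand: each candidate label becomes a hypothesis, and labels are compared by their entailment probability. The hypothesis template and the normalisation are simplifications of what the Transformers pipeline actually does.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_name = 'cross-encoder/nli-MiniLM2-L6-H768'
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

sent = "Apple just announced the newest iPhone X"
candidate_labels = ["technology", "sports", "politics"]

# One (premise, hypothesis) pair per candidate label; the template is an assumption.
hypotheses = [f"This example is about {label}." for label in candidate_labels]
features = tokenizer([sent] * len(hypotheses), hypotheses, padding=True, truncation=True, return_tensors="pt")

model.eval()
with torch.no_grad():
    logits = model(**features).logits  # columns: contradiction, entailment, neutral

# Rank the labels by their (normalised) entailment probability.
entailment_probs = logits.softmax(dim=1)[:, 1]
scores = (entailment_probs / entailment_probs.sum()).tolist()
print(sorted(zip(candidate_labels, scores), key=lambda x: x[1], reverse=True))
```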
{"language": "en", "license": "apache-2.0", "tags": ["MiniLMv2"], "datasets": ["multi_nli", "snli"], "metrics": ["accuracy"], "pipeline_tag": "zero-shot-classification"}
cross-encoder/nli-MiniLM2-L6-H768
null
[ "transformers", "pytorch", "roberta", "text-classification", "MiniLMv2", "zero-shot-classification", "en", "dataset:multi_nli", "dataset:snli", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #roberta #text-classification #MiniLMv2 #zero-shot-classification #en #dataset-multi_nli #dataset-snli #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# Cross-Encoder for Natural Language Inference This model was trained using SentenceTransformers Cross-Encoder class. ## Training Data The model was trained on the SNLI and MultiNLI datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral. ## Performance For evaluation results, see URL - Pretrained Cross-Encoder. ## Usage Pre-trained models can be used like this: ## Usage with Transformers AutoModel You can use the model also directly with Transformers library (without SentenceTransformers library): ## Zero-Shot Classification This model can also be used for zero-shot-classification:
[ "# Cross-Encoder for Natural Language Inference\nThis model was trained using SentenceTransformers Cross-Encoder class.", "## Training Data\nThe model was trained on the SNLI and MultiNLI datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral.", "## Performance\nFor evaluation results, see URL - Pretrained Cross-Encoder.", "## Usage\n\nPre-trained models can be used like this:", "## Usage with Transformers AutoModel\nYou can use the model also directly with Transformers library (without SentenceTransformers library):", "## Zero-Shot Classification\nThis model can also be used for zero-shot-classification:" ]
[ "TAGS\n#transformers #pytorch #roberta #text-classification #MiniLMv2 #zero-shot-classification #en #dataset-multi_nli #dataset-snli #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# Cross-Encoder for Natural Language Inference\nThis model was trained using SentenceTransformers Cross-Encoder class.", "## Training Data\nThe model was trained on the SNLI and MultiNLI datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral.", "## Performance\nFor evaluation results, see URL - Pretrained Cross-Encoder.", "## Usage\n\nPre-trained models can be used like this:", "## Usage with Transformers AutoModel\nYou can use the model also directly with Transformers library (without SentenceTransformers library):", "## Zero-Shot Classification\nThis model can also be used for zero-shot-classification:" ]
zero-shot-classification
transformers
# Cross-Encoder for Natural Language Inference This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. ## Training Data The model was trained on the [SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral. ## Performance For evaluation results, see [SBERT.net - Pretrained Cross-Encoder](https://www.sbert.net/docs/pretrained_cross-encoders.html#nli). ## Usage Pre-trained models can be used like this: ```python from sentence_transformers import CrossEncoder model = CrossEncoder('cross-encoder/nli-deberta-base') scores = model.predict([('A man is eating pizza', 'A man eats something'), ('A black race car starts up in front of a crowd of people.', 'A man is driving down a lonely road.')]) #Convert scores to labels label_mapping = ['contradiction', 'entailment', 'neutral'] labels = [label_mapping[score_max] for score_max in scores.argmax(axis=1)] ``` ## Usage with Transformers AutoModel You can use the model also directly with Transformers library (without SentenceTransformers library): ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/nli-deberta-base') tokenizer = AutoTokenizer.from_pretrained('cross-encoder/nli-deberta-base') features = tokenizer(['A man is eating pizza', 'A black race car starts up in front of a crowd of people.'], ['A man eats something', 'A man is driving down a lonely road.'], padding=True, truncation=True, return_tensors="pt") model.eval() with torch.no_grad(): scores = model(**features).logits label_mapping = ['contradiction', 'entailment', 'neutral'] labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)] print(labels) ``` ## Zero-Shot Classification This model can also be used for zero-shot-classification: ```python from transformers import pipeline classifier = pipeline("zero-shot-classification", model='cross-encoder/nli-deberta-base') sent = "Apple just announced the newest iPhone X" candidate_labels = ["technology", "sports", "politics"] res = classifier(sent, candidate_labels) print(res) ```
{"language": "en", "license": "apache-2.0", "tags": ["deberta-base-base"], "datasets": ["multi_nli", "snli"], "metrics": ["accuracy"], "pipeline_tag": "zero-shot-classification"}
cross-encoder/nli-deberta-base
null
[ "transformers", "pytorch", "deberta", "text-classification", "deberta-base-base", "zero-shot-classification", "en", "dataset:multi_nli", "dataset:snli", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #deberta #text-classification #deberta-base-base #zero-shot-classification #en #dataset-multi_nli #dataset-snli #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
# Cross-Encoder for Natural Language Inference This model was trained using SentenceTransformers Cross-Encoder class. ## Training Data The model was trained on the SNLI and MultiNLI datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral. ## Performance For evaluation results, see URL - Pretrained Cross-Encoder. ## Usage Pre-trained models can be used like this: ## Usage with Transformers AutoModel You can use the model also directly with Transformers library (without SentenceTransformers library): ## Zero-Shot Classification This model can also be used for zero-shot-classification:
[ "# Cross-Encoder for Natural Language Inference\nThis model was trained using SentenceTransformers Cross-Encoder class.", "## Training Data\nThe model was trained on the SNLI and MultiNLI datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral.", "## Performance\nFor evaluation results, see URL - Pretrained Cross-Encoder.", "## Usage\n\nPre-trained models can be used like this:", "## Usage with Transformers AutoModel\nYou can use the model also directly with Transformers library (without SentenceTransformers library):", "## Zero-Shot Classification\nThis model can also be used for zero-shot-classification:" ]
[ "TAGS\n#transformers #pytorch #deberta #text-classification #deberta-base-base #zero-shot-classification #en #dataset-multi_nli #dataset-snli #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# Cross-Encoder for Natural Language Inference\nThis model was trained using SentenceTransformers Cross-Encoder class.", "## Training Data\nThe model was trained on the SNLI and MultiNLI datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral.", "## Performance\nFor evaluation results, see URL - Pretrained Cross-Encoder.", "## Usage\n\nPre-trained models can be used like this:", "## Usage with Transformers AutoModel\nYou can use the model also directly with Transformers library (without SentenceTransformers library):", "## Zero-Shot Classification\nThis model can also be used for zero-shot-classification:" ]
zero-shot-classification
transformers
# Cross-Encoder for Natural Language Inference
This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. This model is based on [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base)

## Training Data
The model was trained on the [SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral.

## Performance
- Accuracy on SNLI-test dataset: 92.38
- Accuracy on MNLI mismatched set: 90.04

For further evaluation results, see [SBERT.net - Pretrained Cross-Encoder](https://www.sbert.net/docs/pretrained_cross-encoders.html#nli).

## Usage

Pre-trained models can be used like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('cross-encoder/nli-deberta-v3-base')
scores = model.predict([('A man is eating pizza', 'A man eats something'), ('A black race car starts up in front of a crowd of people.', 'A man is driving down a lonely road.')])

#Convert scores to labels
label_mapping = ['contradiction', 'entailment', 'neutral']
labels = [label_mapping[score_max] for score_max in scores.argmax(axis=1)]
```

## Usage with Transformers AutoModel
You can use the model also directly with Transformers library (without SentenceTransformers library):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/nli-deberta-v3-base')
tokenizer = AutoTokenizer.from_pretrained('cross-encoder/nli-deberta-v3-base')

features = tokenizer(['A man is eating pizza', 'A black race car starts up in front of a crowd of people.'], ['A man eats something', 'A man is driving down a lonely road.'], padding=True, truncation=True, return_tensors="pt")

model.eval()
with torch.no_grad():
    scores = model(**features).logits
    label_mapping = ['contradiction', 'entailment', 'neutral']
    labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)]
    print(labels)
```

## Zero-Shot Classification
This model can also be used for zero-shot-classification:
```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model='cross-encoder/nli-deberta-v3-base')

sent = "Apple just announced the newest iPhone X"
candidate_labels = ["technology", "sports", "politics"]
res = classifier(sent, candidate_labels)
print(res)
```
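The accuracy figures above are reported without an evaluation script; the sketch below (not part of the original card) shows one way the SNLI-test number could be checked. Batching, the label-id mapping, and the exact evaluation protocol are assumptions.

```python
from datasets import load_dataset
from sentence_transformers import CrossEncoder

model = CrossEncoder('cross-encoder/nli-deberta-v3-base')

snli = load_dataset('snli', split='test')
snli = snli.filter(lambda ex: ex['label'] != -1)  # drop pairs without a gold label

pairs = list(zip(snli['premise'], snli['hypothesis']))
pred = model.predict(pairs).argmax(axis=1)

# Model output order: 0=contradiction, 1=entailment, 2=neutral;
# SNLI gold labels: 0=entailment, 1=neutral, 2=contradiction.
to_model_id = {0: 1, 1: 2, 2: 0}
gold = [to_model_id[label] for label in snli['label']]

accuracy = sum(int(p == g) for p, g in zip(pred, gold)) / len(gold)
print(f"SNLI test accuracy: {accuracy:.4f}")
```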
{"language": "en", "license": "apache-2.0", "tags": ["microsoft/deberta-v3-base"], "datasets": ["multi_nli", "snli"], "metrics": ["accuracy"], "pipeline_tag": "zero-shot-classification"}
cross-encoder/nli-deberta-v3-base
null
[ "transformers", "pytorch", "deberta-v2", "text-classification", "microsoft/deberta-v3-base", "zero-shot-classification", "en", "dataset:multi_nli", "dataset:snli", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #deberta-v2 #text-classification #microsoft/deberta-v3-base #zero-shot-classification #en #dataset-multi_nli #dataset-snli #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# Cross-Encoder for Natural Language Inference
This model was trained using SentenceTransformers Cross-Encoder class. This model is based on microsoft/deberta-v3-base

## Training Data
The model was trained on the SNLI and MultiNLI datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral.

## Performance
- Accuracy on SNLI-test dataset: 92.38
- Accuracy on MNLI mismatched set: 90.04

For further evaluation results, see URL - Pretrained Cross-Encoder.

## Usage

Pre-trained models can be used like this:

## Usage with Transformers AutoModel
You can use the model also directly with Transformers library (without SentenceTransformers library):

## Zero-Shot Classification
This model can also be used for zero-shot-classification:
[ "# Cross-Encoder for Natural Language Inference\nThis model was trained using SentenceTransformers Cross-Encoder class. This model is based on microsoft/deberta-v3-base", "## Training Data\nThe model was trained on the SNLI and MultiNLI datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral.", "## Performance\r\n- Accuracy on SNLI-test dataset: 92.38\r\n- Accuracy on MNLI mismatched set: 90.04\r\n\nFor further evaluation results, see URL - Pretrained Cross-Encoder.", "## Usage\n\nPre-trained models can be used like this:", "## Usage with Transformers AutoModel\nYou can use the model also directly with Transformers library (without SentenceTransformers library):", "## Zero-Shot Classification\nThis model can also be used for zero-shot-classification:" ]
[ "TAGS\n#transformers #pytorch #deberta-v2 #text-classification #microsoft/deberta-v3-base #zero-shot-classification #en #dataset-multi_nli #dataset-snli #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# Cross-Encoder for Natural Language Inference\nThis model was trained using SentenceTransformers Cross-Encoder class. This model is based on microsoft/deberta-v3-base", "## Training Data\nThe model was trained on the SNLI and MultiNLI datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral.", "## Performance\r\n- Accuracy on SNLI-test dataset: 92.38\r\n- Accuracy on MNLI mismatched set: 90.04\r\n\nFor further evaluation results, see URL - Pretrained Cross-Encoder.", "## Usage\n\nPre-trained models can be used like this:", "## Usage with Transformers AutoModel\nYou can use the model also directly with Transformers library (without SentenceTransformers library):", "## Zero-Shot Classification\nThis model can also be used for zero-shot-classification:" ]
zero-shot-classification
transformers
# Cross-Encoder for Natural Language Inference
This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. This model is based on [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large)

## Training Data
The model was trained on the [SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral.

## Performance
- Accuracy on SNLI-test dataset: 92.20
- Accuracy on MNLI mismatched set: 90.49

For further evaluation results, see [SBERT.net - Pretrained Cross-Encoder](https://www.sbert.net/docs/pretrained_cross-encoders.html#nli).

## Usage

Pre-trained models can be used like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('cross-encoder/nli-deberta-v3-large')
scores = model.predict([('A man is eating pizza', 'A man eats something'), ('A black race car starts up in front of a crowd of people.', 'A man is driving down a lonely road.')])

#Convert scores to labels
label_mapping = ['contradiction', 'entailment', 'neutral']
labels = [label_mapping[score_max] for score_max in scores.argmax(axis=1)]
```

## Usage with Transformers AutoModel
You can use the model also directly with Transformers library (without SentenceTransformers library):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/nli-deberta-v3-large')
tokenizer = AutoTokenizer.from_pretrained('cross-encoder/nli-deberta-v3-large')

features = tokenizer(['A man is eating pizza', 'A black race car starts up in front of a crowd of people.'], ['A man eats something', 'A man is driving down a lonely road.'], padding=True, truncation=True, return_tensors="pt")

model.eval()
with torch.no_grad():
    scores = model(**features).logits
    label_mapping = ['contradiction', 'entailment', 'neutral']
    labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)]
    print(labels)
```

## Zero-Shot Classification
This model can also be used for zero-shot-classification:
```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model='cross-encoder/nli-deberta-v3-large')

sent = "Apple just announced the newest iPhone X"
candidate_labels = ["technology", "sports", "politics"]
res = classifier(sent, candidate_labels)
print(res)
```
{"language": "en", "license": "apache-2.0", "tags": ["microsoft/deberta-v3-large"], "datasets": ["multi_nli", "snli"], "metrics": ["accuracy"], "pipeline_tag": "zero-shot-classification"}
cross-encoder/nli-deberta-v3-large
null
[ "transformers", "pytorch", "deberta-v2", "text-classification", "microsoft/deberta-v3-large", "zero-shot-classification", "en", "dataset:multi_nli", "dataset:snli", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #deberta-v2 #text-classification #microsoft/deberta-v3-large #zero-shot-classification #en #dataset-multi_nli #dataset-snli #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# Cross-Encoder for Natural Language Inference
This model was trained using SentenceTransformers Cross-Encoder class. This model is based on microsoft/deberta-v3-large

## Training Data
The model was trained on the SNLI and MultiNLI datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral.

## Performance
- Accuracy on SNLI-test dataset: 92.20
- Accuracy on MNLI mismatched set: 90.49

For further evaluation results, see URL - Pretrained Cross-Encoder.

## Usage

Pre-trained models can be used like this:

## Usage with Transformers AutoModel
You can use the model also directly with Transformers library (without SentenceTransformers library):

## Zero-Shot Classification
This model can also be used for zero-shot-classification:
[ "# Cross-Encoder for Natural Language Inference\nThis model was trained using SentenceTransformers Cross-Encoder class. This model is based on microsoft/deberta-v3-large", "## Training Data\nThe model was trained on the SNLI and MultiNLI datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral.", "## Performance\r\n- Accuracy on SNLI-test dataset: 92.20\r\n- Accuracy on MNLI mismatched set: 90.49\r\n\nFor further evaluation results, see URL - Pretrained Cross-Encoder.", "## Usage\n\nPre-trained models can be used like this:", "## Usage with Transformers AutoModel\nYou can use the model also directly with Transformers library (without SentenceTransformers library):", "## Zero-Shot Classification\nThis model can also be used for zero-shot-classification:" ]
[ "TAGS\n#transformers #pytorch #deberta-v2 #text-classification #microsoft/deberta-v3-large #zero-shot-classification #en #dataset-multi_nli #dataset-snli #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# Cross-Encoder for Natural Language Inference\nThis model was trained using SentenceTransformers Cross-Encoder class. This model is based on microsoft/deberta-v3-large", "## Training Data\nThe model was trained on the SNLI and MultiNLI datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral.", "## Performance\r\n- Accuracy on SNLI-test dataset: 92.20\r\n- Accuracy on MNLI mismatched set: 90.49\r\n\nFor further evaluation results, see URL - Pretrained Cross-Encoder.", "## Usage\n\nPre-trained models can be used like this:", "## Usage with Transformers AutoModel\nYou can use the model also directly with Transformers library (without SentenceTransformers library):", "## Zero-Shot Classification\nThis model can also be used for zero-shot-classification:" ]
zero-shot-classification
transformers
# Cross-Encoder for Natural Language Inference
This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. This model is based on [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small)

## Training Data
The model was trained on the [SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral.

## Performance
- Accuracy on SNLI-test dataset: 91.65
- Accuracy on MNLI mismatched set: 87.55

For further evaluation results, see [SBERT.net - Pretrained Cross-Encoder](https://www.sbert.net/docs/pretrained_cross-encoders.html#nli).

## Usage

Pre-trained models can be used like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('cross-encoder/nli-deberta-v3-small')
scores = model.predict([('A man is eating pizza', 'A man eats something'), ('A black race car starts up in front of a crowd of people.', 'A man is driving down a lonely road.')])

#Convert scores to labels
label_mapping = ['contradiction', 'entailment', 'neutral']
labels = [label_mapping[score_max] for score_max in scores.argmax(axis=1)]
```

## Usage with Transformers AutoModel
You can use the model also directly with Transformers library (without SentenceTransformers library):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/nli-deberta-v3-small')
tokenizer = AutoTokenizer.from_pretrained('cross-encoder/nli-deberta-v3-small')

features = tokenizer(['A man is eating pizza', 'A black race car starts up in front of a crowd of people.'], ['A man eats something', 'A man is driving down a lonely road.'], padding=True, truncation=True, return_tensors="pt")

model.eval()
with torch.no_grad():
    scores = model(**features).logits
    label_mapping = ['contradiction', 'entailment', 'neutral']
    labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)]
    print(labels)
```

## Zero-Shot Classification
This model can also be used for zero-shot-classification:
```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model='cross-encoder/nli-deberta-v3-small')

sent = "Apple just announced the newest iPhone X"
candidate_labels = ["technology", "sports", "politics"]
res = classifier(sent, candidate_labels)
print(res)
```
{"language": "en", "license": "apache-2.0", "tags": ["microsoft/deberta-v3-small"], "datasets": ["multi_nli", "snli"], "metrics": ["accuracy"], "pipeline_tag": "zero-shot-classification"}
cross-encoder/nli-deberta-v3-small
null
[ "transformers", "pytorch", "deberta-v2", "text-classification", "microsoft/deberta-v3-small", "zero-shot-classification", "en", "dataset:multi_nli", "dataset:snli", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #deberta-v2 #text-classification #microsoft/deberta-v3-small #zero-shot-classification #en #dataset-multi_nli #dataset-snli #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
# Cross-Encoder for Natural Language Inference
This model was trained using SentenceTransformers Cross-Encoder class. This model is based on microsoft/deberta-v3-small

## Training Data
The model was trained on the SNLI and MultiNLI datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral.

## Performance
- Accuracy on SNLI-test dataset: 91.65
- Accuracy on MNLI mismatched set: 87.55

For further evaluation results, see URL - Pretrained Cross-Encoder.

## Usage

Pre-trained models can be used like this:

## Usage with Transformers AutoModel
You can use the model also directly with Transformers library (without SentenceTransformers library):

## Zero-Shot Classification
This model can also be used for zero-shot-classification:
[ "# Cross-Encoder for Natural Language Inference\nThis model was trained using SentenceTransformers Cross-Encoder class. This model is based on microsoft/deberta-v3-small", "## Training Data\nThe model was trained on the SNLI and MultiNLI datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral.", "## Performance\r\n- Accuracy on SNLI-test dataset: 91.65\r\n- Accuracy on MNLI mismatched set: 87.55\r\n\nFor further evaluation results, see URL - Pretrained Cross-Encoder.", "## Usage\n\nPre-trained models can be used like this:", "## Usage with Transformers AutoModel\nYou can use the model also directly with Transformers library (without SentenceTransformers library):", "## Zero-Shot Classification\nThis model can also be used for zero-shot-classification:" ]
[ "TAGS\n#transformers #pytorch #deberta-v2 #text-classification #microsoft/deberta-v3-small #zero-shot-classification #en #dataset-multi_nli #dataset-snli #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# Cross-Encoder for Natural Language Inference\nThis model was trained using SentenceTransformers Cross-Encoder class. This model is based on microsoft/deberta-v3-small", "## Training Data\nThe model was trained on the SNLI and MultiNLI datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral.", "## Performance\r\n- Accuracy on SNLI-test dataset: 91.65\r\n- Accuracy on MNLI mismatched set: 87.55\r\n\nFor further evaluation results, see URL - Pretrained Cross-Encoder.", "## Usage\n\nPre-trained models can be used like this:", "## Usage with Transformers AutoModel\nYou can use the model also directly with Transformers library (without SentenceTransformers library):", "## Zero-Shot Classification\nThis model can also be used for zero-shot-classification:" ]
zero-shot-classification
transformers
# Cross-Encoder for Natural Language Inference
This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. This model is based on [microsoft/deberta-v3-xsmall](https://huggingface.co/microsoft/deberta-v3-xsmall)

## Training Data
The model was trained on the [SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral.

## Performance
- Accuracy on SNLI-test dataset: 91.64
- Accuracy on MNLI mismatched set: 87.77

For further evaluation results, see [SBERT.net - Pretrained Cross-Encoder](https://www.sbert.net/docs/pretrained_cross-encoders.html#nli).

## Usage

Pre-trained models can be used like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('cross-encoder/nli-deberta-v3-xsmall')
scores = model.predict([('A man is eating pizza', 'A man eats something'), ('A black race car starts up in front of a crowd of people.', 'A man is driving down a lonely road.')])

#Convert scores to labels
label_mapping = ['contradiction', 'entailment', 'neutral']
labels = [label_mapping[score_max] for score_max in scores.argmax(axis=1)]
```

## Usage with Transformers AutoModel
You can use the model also directly with Transformers library (without SentenceTransformers library):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/nli-deberta-v3-xsmall')
tokenizer = AutoTokenizer.from_pretrained('cross-encoder/nli-deberta-v3-xsmall')

features = tokenizer(['A man is eating pizza', 'A black race car starts up in front of a crowd of people.'], ['A man eats something', 'A man is driving down a lonely road.'], padding=True, truncation=True, return_tensors="pt")

model.eval()
with torch.no_grad():
    scores = model(**features).logits
    label_mapping = ['contradiction', 'entailment', 'neutral']
    labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)]
    print(labels)
```

## Zero-Shot Classification
This model can also be used for zero-shot-classification:
```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model='cross-encoder/nli-deberta-v3-xsmall')

sent = "Apple just announced the newest iPhone X"
candidate_labels = ["technology", "sports", "politics"]
res = classifier(sent, candidate_labels)
print(res)
```
{"language": "en", "license": "apache-2.0", "tags": ["microsoft/deberta-v3-xsmall"], "datasets": ["multi_nli", "snli"], "metrics": ["accuracy"], "pipeline_tag": "zero-shot-classification"}
cross-encoder/nli-deberta-v3-xsmall
null
[ "transformers", "pytorch", "deberta-v2", "text-classification", "microsoft/deberta-v3-xsmall", "zero-shot-classification", "en", "dataset:multi_nli", "dataset:snli", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #deberta-v2 #text-classification #microsoft/deberta-v3-xsmall #zero-shot-classification #en #dataset-multi_nli #dataset-snli #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
# Cross-Encoder for Natural Language Inference
This model was trained using SentenceTransformers Cross-Encoder class. This model is based on microsoft/deberta-v3-xsmall

## Training Data
The model was trained on the SNLI and MultiNLI datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral.

## Performance
- Accuracy on SNLI-test dataset: 91.64
- Accuracy on MNLI mismatched set: 87.77

For further evaluation results, see URL - Pretrained Cross-Encoder.

## Usage

Pre-trained models can be used like this:

## Usage with Transformers AutoModel
You can use the model also directly with Transformers library (without SentenceTransformers library):

## Zero-Shot Classification
This model can also be used for zero-shot-classification:
[ "# Cross-Encoder for Natural Language Inference\nThis model was trained using SentenceTransformers Cross-Encoder class. This model is based on microsoft/deberta-v3-xsmall", "## Training Data\nThe model was trained on the SNLI and MultiNLI datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral.", "## Performance\r\n- Accuracy on SNLI-test dataset: 91.64\r\n- Accuracy on MNLI mismatched set: 87.77\r\n\nFor further evaluation results, see URL - Pretrained Cross-Encoder.", "## Usage\n\nPre-trained models can be used like this:", "## Usage with Transformers AutoModel\nYou can use the model also directly with Transformers library (without SentenceTransformers library):", "## Zero-Shot Classification\nThis model can also be used for zero-shot-classification:" ]
[ "TAGS\n#transformers #pytorch #deberta-v2 #text-classification #microsoft/deberta-v3-xsmall #zero-shot-classification #en #dataset-multi_nli #dataset-snli #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# Cross-Encoder for Natural Language Inference\nThis model was trained using SentenceTransformers Cross-Encoder class. This model is based on microsoft/deberta-v3-xsmall", "## Training Data\nThe model was trained on the SNLI and MultiNLI datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral.", "## Performance\r\n- Accuracy on SNLI-test dataset: 91.64\r\n- Accuracy on MNLI mismatched set: 87.77\r\n\nFor further evaluation results, see URL - Pretrained Cross-Encoder.", "## Usage\n\nPre-trained models can be used like this:", "## Usage with Transformers AutoModel\nYou can use the model also directly with Transformers library (without SentenceTransformers library):", "## Zero-Shot Classification\nThis model can also be used for zero-shot-classification:" ]
zero-shot-classification
transformers
# Cross-Encoder for Natural Language Inference This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. ## Training Data The model was trained on the [SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral. ## Performance For evaluation results, see [SBERT.net - Pretrained Cross-Encoder](https://www.sbert.net/docs/pretrained_cross-encoders.html#nli). ## Usage Pre-trained models can be used like this: ```python from sentence_transformers import CrossEncoder model = CrossEncoder('cross-encoder/nli-distilroberta-base') scores = model.predict([('A man is eating pizza', 'A man eats something'), ('A black race car starts up in front of a crowd of people.', 'A man is driving down a lonely road.')]) #Convert scores to labels label_mapping = ['contradiction', 'entailment', 'neutral'] labels = [label_mapping[score_max] for score_max in scores.argmax(axis=1)] ``` ## Usage with Transformers AutoModel You can use the model also directly with Transformers library (without SentenceTransformers library): ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/nli-distilroberta-base') tokenizer = AutoTokenizer.from_pretrained('cross-encoder/nli-distilroberta-base') features = tokenizer(['A man is eating pizza', 'A black race car starts up in front of a crowd of people.'], ['A man eats something', 'A man is driving down a lonely road.'], padding=True, truncation=True, return_tensors="pt") model.eval() with torch.no_grad(): scores = model(**features).logits label_mapping = ['contradiction', 'entailment', 'neutral'] labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)] print(labels) ``` ## Zero-Shot Classification This model can also be used for zero-shot-classification: ```python from transformers import pipeline classifier = pipeline("zero-shot-classification", model='cross-encoder/nli-distilroberta-base') sent = "Apple just announced the newest iPhone X" candidate_labels = ["technology", "sports", "politics"] res = classifier(sent, candidate_labels) print(res) ```
{"language": "en", "license": "apache-2.0", "tags": ["distilroberta-base"], "datasets": ["multi_nli", "snli"], "metrics": ["accuracy"], "pipeline_tag": "zero-shot-classification"}
cross-encoder/nli-distilroberta-base
null
[ "transformers", "pytorch", "jax", "roberta", "text-classification", "distilroberta-base", "zero-shot-classification", "en", "dataset:multi_nli", "dataset:snli", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #jax #roberta #text-classification #distilroberta-base #zero-shot-classification #en #dataset-multi_nli #dataset-snli #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
# Cross-Encoder for Natural Language Inference This model was trained using SentenceTransformers Cross-Encoder class. ## Training Data The model was trained on the SNLI and MultiNLI datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral. ## Performance For evaluation results, see URL - Pretrained Cross-Encoder. ## Usage Pre-trained models can be used like this: ## Usage with Transformers AutoModel You can use the model also directly with Transformers library (without SentenceTransformers library): ## Zero-Shot Classification This model can also be used for zero-shot-classification:
[ "# Cross-Encoder for Natural Language Inference\nThis model was trained using SentenceTransformers Cross-Encoder class.", "## Training Data\nThe model was trained on the SNLI and MultiNLI datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral.", "## Performance\nFor evaluation results, see URL - Pretrained Cross-Encoder.", "## Usage\n\nPre-trained models can be used like this:", "## Usage with Transformers AutoModel\nYou can use the model also directly with Transformers library (without SentenceTransformers library):", "## Zero-Shot Classification\nThis model can also be used for zero-shot-classification:" ]
[ "TAGS\n#transformers #pytorch #jax #roberta #text-classification #distilroberta-base #zero-shot-classification #en #dataset-multi_nli #dataset-snli #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# Cross-Encoder for Natural Language Inference\nThis model was trained using SentenceTransformers Cross-Encoder class.", "## Training Data\nThe model was trained on the SNLI and MultiNLI datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral.", "## Performance\nFor evaluation results, see URL - Pretrained Cross-Encoder.", "## Usage\n\nPre-trained models can be used like this:", "## Usage with Transformers AutoModel\nYou can use the model also directly with Transformers library (without SentenceTransformers library):", "## Zero-Shot Classification\nThis model can also be used for zero-shot-classification:" ]
zero-shot-classification
transformers
# Cross-Encoder for Natural Language Inference This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. ## Training Data The model was trained on the [SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral. ## Performance For evaluation results, see [SBERT.net - Pretrained Cross-Encoder](https://www.sbert.net/docs/pretrained_cross-encoders.html#nli). ## Usage Pre-trained models can be used like this: ```python from sentence_transformers import CrossEncoder model = CrossEncoder('cross-encoder/nli-roberta-base') scores = model.predict([('A man is eating pizza', 'A man eats something'), ('A black race car starts up in front of a crowd of people.', 'A man is driving down a lonely road.')]) #Convert scores to labels label_mapping = ['contradiction', 'entailment', 'neutral'] labels = [label_mapping[score_max] for score_max in scores.argmax(axis=1)] ``` ## Usage with Transformers AutoModel You can use the model also directly with Transformers library (without SentenceTransformers library): ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/nli-roberta-base') tokenizer = AutoTokenizer.from_pretrained('cross-encoder/nli-roberta-base') features = tokenizer(['A man is eating pizza', 'A black race car starts up in front of a crowd of people.'], ['A man eats something', 'A man is driving down a lonely road.'], padding=True, truncation=True, return_tensors="pt") model.eval() with torch.no_grad(): scores = model(**features).logits label_mapping = ['contradiction', 'entailment', 'neutral'] labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)] print(labels) ``` ## Zero-Shot Classification This model can also be used for zero-shot-classification: ```python from transformers import pipeline classifier = pipeline("zero-shot-classification", model='cross-encoder/nli-roberta-base') sent = "Apple just announced the newest iPhone X" candidate_labels = ["technology", "sports", "politics"] res = classifier(sent, candidate_labels) print(res) ```
{"language": "en", "license": "apache-2.0", "tags": ["roberta-base"], "datasets": ["multi_nli", "snli"], "metrics": ["accuracy"], "pipeline_tag": "zero-shot-classification"}
cross-encoder/nli-roberta-base
null
[ "transformers", "pytorch", "jax", "roberta", "text-classification", "roberta-base", "zero-shot-classification", "en", "dataset:multi_nli", "dataset:snli", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #jax #roberta #text-classification #roberta-base #zero-shot-classification #en #dataset-multi_nli #dataset-snli #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
# Cross-Encoder for Natural Language Inference This model was trained using the SentenceTransformers Cross-Encoder class. ## Training Data The model was trained on the SNLI and MultiNLI datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral. ## Performance For evaluation results, see URL - Pretrained Cross-Encoder. ## Usage

Pre-trained models can be used like this: ## Usage with Transformers AutoModel You can also use the model directly with the Transformers library (without the SentenceTransformers library): ## Zero-Shot Classification This model can also be used for zero-shot-classification:
[ "# Cross-Encoder for Natural Language Inference\nThis model was trained using SentenceTransformers Cross-Encoder class.", "## Training Data\nThe model was trained on the SNLI and MultiNLI datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral.", "## Performance\nFor evaluation results, see URL - Pretrained Cross-Encoder.", "## Usage\n\nPre-trained models can be used like this:", "## Usage with Transformers AutoModel\nYou can use the model also directly with Transformers library (without SentenceTransformers library):", "## Zero-Shot Classification\nThis model can also be used for zero-shot-classification:" ]
[ "TAGS\n#transformers #pytorch #jax #roberta #text-classification #roberta-base #zero-shot-classification #en #dataset-multi_nli #dataset-snli #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# Cross-Encoder for Natural Language Inference\nThis model was trained using SentenceTransformers Cross-Encoder class.", "## Training Data\nThe model was trained on the SNLI and MultiNLI datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral.", "## Performance\nFor evaluation results, see URL - Pretrained Cross-Encoder.", "## Usage\n\nPre-trained models can be used like this:", "## Usage with Transformers AutoModel\nYou can use the model also directly with Transformers library (without SentenceTransformers library):", "## Zero-Shot Classification\nThis model can also be used for zero-shot-classification:" ]
text-classification
transformers
# Cross-Encoder for Question Natural Language Inference (QNLI)
This model was trained using the [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class.

## Training Data
Given a question and paragraph, can the question be answered by the paragraph? The model has been trained on the [GLUE QNLI](https://arxiv.org/abs/1804.07461) dataset, which transformed the [SQuAD dataset](https://rajpurkar.github.io/SQuAD-explorer/) into an NLI task.

## Performance
For performance results of this model, see [SBERT.net - Pretrained Cross-Encoders](https://www.sbert.net/docs/pretrained_cross-encoders.html).

## Usage

Pre-trained models can be used like this:
```python
from sentence_transformers import CrossEncoder

model = CrossEncoder('cross-encoder/qnli-distilroberta-base')
# Each pair is (question, paragraph); higher scores mean the paragraph is more likely to answer the question.
scores = model.predict([('How many people live in Berlin?', 'Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.'),
                        ('What is the size of New York?', 'New York City is famous for the Metropolitan Museum of Art.')])
```

## Usage with Transformers AutoModel
You can also use the model directly with the Transformers library (without the SentenceTransformers library):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/qnli-distilroberta-base')
tokenizer = AutoTokenizer.from_pretrained('cross-encoder/qnli-distilroberta-base')

features = tokenizer(['How many people live in Berlin?', 'What is the size of New York?'],
                     ['Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.'],
                     padding=True, truncation=True, return_tensors="pt")

model.eval()
with torch.no_grad():
    # Apply a sigmoid to turn the single logit into a score between 0 and 1.
    scores = torch.sigmoid(model(**features).logits)
    print(scores)
```
{"license": "apache-2.0"}
cross-encoder/qnli-distilroberta-base
null
[ "transformers", "pytorch", "jax", "roberta", "text-classification", "arxiv:1804.07461", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1804.07461" ]
[]
TAGS #transformers #pytorch #jax #roberta #text-classification #arxiv-1804.07461 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
# Cross-Encoder for Question Natural Language Inference (QNLI) This model was trained using the SentenceTransformers Cross-Encoder class. ## Training Data Given a question and paragraph, can the question be answered by the paragraph? The model has been trained on the GLUE QNLI dataset, which transformed the SQuAD dataset into an NLI task. ## Performance For performance results of this model, see URL - Pretrained Cross-Encoders. ## Usage

Pre-trained models can be used like this: ## Usage with Transformers AutoModel You can also use the model directly with the Transformers library (without the SentenceTransformers library):
[ "# Cross-Encoder for Quora Duplicate Questions Detection\nThis model was trained using SentenceTransformers Cross-Encoder class.", "## Training Data\nGiven a question and paragraph, can the question be answered by the paragraph? The models have been trained on the GLUE QNLI dataset, which transformed the SQuAD dataset into an NLI task.", "## Performance\nFor performance results of this model, see [URL Pre-trained Cross-Encoder][URL", "## Usage\n\nPre-trained models can be used like this:", "## Usage with Transformers AutoModel\nYou can use the model also directly with Transformers library (without SentenceTransformers library):" ]
[ "TAGS\n#transformers #pytorch #jax #roberta #text-classification #arxiv-1804.07461 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# Cross-Encoder for Quora Duplicate Questions Detection\nThis model was trained using SentenceTransformers Cross-Encoder class.", "## Training Data\nGiven a question and paragraph, can the question be answered by the paragraph? The models have been trained on the GLUE QNLI dataset, which transformed the SQuAD dataset into an NLI task.", "## Performance\nFor performance results of this model, see [URL Pre-trained Cross-Encoder][URL", "## Usage\n\nPre-trained models can be used like this:", "## Usage with Transformers AutoModel\nYou can use the model also directly with Transformers library (without SentenceTransformers library):" ]