Dataset schema (per-row fields, with observed minimum and maximum values):

| Column | Type | Min | Max |
|---|---|---|---|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-09-06 12:28:13 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (543 classes) | | |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-09-06 12:27:52 |
| card | string (length) | 11 | 1.01M |
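To make the schema concrete, here is a minimal sketch of loading and querying such a metadata export with pandas; the file name `models.parquet` is a hypothetical placeholder for wherever this dump is stored, not a path given by the source.

```python
import pandas as pd

# Load the exported model-metadata table (file name is hypothetical).
df = pd.read_parquet("models.parquet")

# Columns follow the schema above: modelId, author, last_modified,
# downloads, likes, library_name, tags, pipeline_tag, createdAt, card.
top_text_clf = (
    df[df["pipeline_tag"] == "text-classification"]
    .sort_values("downloads", ascending=False)
    .loc[:, ["modelId", "author", "downloads", "likes"]]
    .head(10)
)
print(top_text_clf)
```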
EMBO/sd-panelization
EMBO
2022-03-27T13:21:35Z
6
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "token-classification", "dataset:EMBO/sd-nlp", "license:agpl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:04Z
--- language: - english thumbnail: tags: - token classification - license: agpl-3.0 datasets: - EMBO/sd-nlp metrics: - --- # sd-panelization ## Model description This model is a [RoBERTa base model](https://huggingface.co/roberta-base) that was further trained using a masked language modeling task on a compendium of English scientific texts from the life sciences using the [BioLang dataset](https://huggingface.co/datasets/EMBO/biolang). It was then fine-tuned for token classification on the SourceData [sd-nlp](https://huggingface.co/datasets/EMBO/sd-nlp) dataset with the `PANELIZATION` task to perform 'parsing' or 'segmentation' of figure legends into fragments corresponding to sub-panels. Figures are usually composite representations of results obtained with heterogeneous experimental approaches and systems. Breaking figures into panels makes it possible to identify more coherent descriptions of individual scientific experiments. ## Intended uses & limitations #### How to use The intended use of this model is for 'parsing' figure legends into sub-fragments corresponding to individual panels as used in SourceData annotations (https://sourcedata.embo.org). For a quick check of the model: ```python from transformers import pipeline, RobertaTokenizerFast, RobertaForTokenClassification example = """Fig 4. a, Volume density of early (Avi) and late (Avd) autophagic vacuoles.a, Volume density of early (Avi) and late (Avd) autophagic vacuoles from four independent cultures. Examples of Avi and Avd are shown in b and c, respectively. Bars represent 0.4 μm. d, Labelling density of cathepsin-D as estimated in two independent experiments. e, Labelling density of LAMP-1.""" tokenizer = RobertaTokenizerFast.from_pretrained('roberta-base', max_len=512) model = RobertaForTokenClassification.from_pretrained('EMBO/sd-panelization') ner = pipeline('ner', model=model, tokenizer=tokenizer) res = ner(example) for r in res: print(r['word'], r['entity']) ``` #### Limitations and bias The model must be used with the `roberta-base` tokenizer. ## Training data The model was trained for token classification using the [`EMBO/sd-nlp PANELIZATION`](https://huggingface.co/datasets/EMBO/sd-nlp) dataset, which includes manually annotated examples. ## Training procedure The training was run on an NVIDIA DGX Station with 4× Tesla V100 GPUs. Training code is available at https://github.com/source-data/soda-roberta - Model fine-tuned: EMBO/bio-lm - Tokenizer vocab size: 50265 - Training data: EMBO/sd-nlp - Dataset configuration: PANELIZATION - Training with 2175 examples. - Evaluating on 622 examples. - Training on 2 features: `O`, `B-PANEL_START` - Epochs: 1.3 - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `learning_rate`: 0.0001 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 ## Eval results Testing on 1802 examples from the test set with `sklearn.metrics`: ``` precision recall f1-score support PANEL_START 0.89 0.95 0.92 5427 micro avg 0.89 0.95 0.92 5427 macro avg 0.89 0.95 0.92 5427 weighted avg 0.89 0.95 0.92 5427 ```
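The card's snippet prints one tag per token but stops there. A minimal sketch of turning those tags into panel fragments, assuming the checkpoint's labels are the `O`/`B-PANEL_START` features listed above (and that the fast tokenizer is used, so the pipeline returns character offsets); the example legend and the grouping logic are illustrative, not part of the original card.

```python
from transformers import pipeline, RobertaTokenizerFast, RobertaForTokenClassification

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base", max_len=512)
model = RobertaForTokenClassification.from_pretrained("EMBO/sd-panelization")
ner = pipeline("ner", model=model, tokenizer=tokenizer)

legend = "a, Volume density of early autophagic vacuoles. b, Labelling density of LAMP-1."
tokens = ner(legend)

# Character offsets where a new panel is predicted to begin; keep 0 and the
# end of the string so the slices cover the whole legend.
starts = sorted({0, len(legend), *[t["start"] for t in tokens
                                   if t["entity"].endswith("PANEL_START")]})
panels = [legend[a:b].strip() for a, b in zip(starts, starts[1:])]
print(panels)
```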
YXHugging/autotrain-xlm-roberta-base-reviews-672119798
YXHugging
2022-03-27T12:58:03Z
5
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "text-classification", "autotrain", "unk", "dataset:YXHugging/autotrain-data-xlm-roberta-base-reviews", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-26T21:07:59Z
--- tags: autotrain language: unk widget: - text: "I love AutoTrain 🤗" datasets: - YXHugging/autotrain-data-xlm-roberta-base-reviews co2_eq_emissions: 1013.8825767332373 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 672119798 - CO2 Emissions (in grams): 1013.8825767332373 ## Validation Metrics - Loss: 0.9646632075309753 - Accuracy: 0.5789333333333333 - Macro F1: 0.5775792001871465 - Micro F1: 0.5789333333333333 - Weighted F1: 0.5775792001871465 - Macro Precision: 0.5829444191847423 - Micro Precision: 0.5789333333333333 - Weighted Precision: 0.5829444191847424 - Macro Recall: 0.5789333333333333 - Micro Recall: 0.5789333333333333 - Weighted Recall: 0.5789333333333333 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/YXHugging/autotrain-xlm-roberta-base-reviews-672119798 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("YXHugging/autotrain-xlm-roberta-base-reviews-672119798", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("YXHugging/autotrain-xlm-roberta-base-reviews-672119798", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
YXHugging/autotrain-xlm-roberta-base-reviews-672119797
YXHugging
2022-03-27T12:55:19Z
5
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "text-classification", "autotrain", "unk", "dataset:YXHugging/autotrain-data-xlm-roberta-base-reviews", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-26T21:05:03Z
--- tags: autotrain language: unk widget: - text: "I love AutoTrain 🤗" datasets: - YXHugging/autotrain-data-xlm-roberta-base-reviews co2_eq_emissions: 1019.0229633198007 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 672119797 - CO2 Emissions (in grams): 1019.0229633198007 ## Validation Metrics - Loss: 0.9898674488067627 - Accuracy: 0.5688083333333334 - Macro F1: 0.5640966271895913 - Micro F1: 0.5688083333333334 - Weighted F1: 0.5640966271895913 - Macro Precision: 0.5673737438011194 - Micro Precision: 0.5688083333333334 - Weighted Precision: 0.5673737438011194 - Macro Recall: 0.5688083333333334 - Micro Recall: 0.5688083333333334 - Weighted Recall: 0.5688083333333334 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/YXHugging/autotrain-xlm-roberta-base-reviews-672119797 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("YXHugging/autotrain-xlm-roberta-base-reviews-672119797", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("YXHugging/autotrain-xlm-roberta-base-reviews-672119797", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
yy642/bert-base-uncased-finetuned-mnli-512-10
yy642
2022-03-27T11:06:39Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:glue", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-27T01:55:50Z
--- tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: bert-base-uncased-finetuned-mnli-512-10 results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: mnli metrics: - name: Accuracy type: accuracy value: 0.9355947399880454 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-mnli-512-10 This model is a fine-tuned version of [yy642/bert-base-uncased-finetuned-mnli-512-5](https://huggingface.co/yy642/bert-base-uncased-finetuned-mnli-512-5) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.4991 - Accuracy: 0.9356 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.0514 | 1.0 | 16363 | 0.4557 | 0.9265 | | 0.0369 | 2.0 | 32726 | 0.4548 | 0.9323 | | 0.0249 | 3.0 | 49089 | 0.4376 | 0.9320 | | 0.0197 | 4.0 | 65452 | 0.4991 | 0.9356 | | 0.0135 | 5.0 | 81815 | 0.5424 | 0.9341 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0a0+17540c5 - Datasets 2.0.0 - Tokenizers 0.11.6
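The card's usage section is empty ("More information needed"). A minimal NLI inference sketch follows; note that the premise/hypothesis order and the MNLI label mapping (entailment/neutral/contradiction) are assumptions to verify against the checkpoint's config, not facts stated in the card.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "yy642/bert-base-uncased-finetuned-mnli-512-10"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

# MNLI is a sentence-pair task: encode premise and hypothesis together.
inputs = tokenizer("A soccer game with multiple males playing.",
                   "Some men are playing a sport.",
                   return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(model.config.id2label.get(pred, pred))
```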
PaddyP/distilbert-base-uncased-finetuned-emotion
PaddyP
2022-03-27T07:06:37Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-27T06:12:25Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2302 - Accuracy: 0.922 - F1: 0.9218 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 250 | 0.3344 | 0.903 | 0.9004 | | No log | 2.0 | 500 | 0.2302 | 0.922 | 0.9218 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.9.1 - Datasets 2.0.0 - Tokenizers 0.10.3
imyday/distilbert-base-uncased-finetuned-emotion
imyday
2022-03-27T06:59:25Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-27T03:09:11Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.923 - name: F1 type: f1 value: 0.9233039604362318 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2282 - Accuracy: 0.923 - F1: 0.9233 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8344 | 1.0 | 250 | 0.3317 | 0.8995 | 0.8953 | | 0.2606 | 2.0 | 500 | 0.2282 | 0.923 | 0.9233 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
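Neither of the two distilbert emotion cards above shows inference code; a minimal sketch follows. The six labels (sadness, joy, love, anger, fear, surprise) come from the `emotion` dataset this card names, but the checkpoint's `id2label` mapping should be checked before relying on them.

```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="imyday/distilbert-base-uncased-finetuned-emotion")
# Returns the top label with its score, e.g. [{'label': 'joy', 'score': 0.98}];
# exact label names depend on the checkpoint config.
print(classifier("I can't wait to see you again!"))
```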
scasutt/wav2vec2-base_toy_train_data_random_noise
scasutt
2022-03-27T02:27:39Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-27T00:14:26Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base_toy_train_data_random_noise results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base_toy_train_data_random_noise This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0909 - Wer: 0.7351 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.128 | 2.1 | 250 | 3.5052 | 1.0 | | 3.0423 | 4.2 | 500 | 2.9312 | 1.0 | | 1.4109 | 6.3 | 750 | 1.2618 | 0.8915 | | 0.9132 | 8.4 | 1000 | 1.1074 | 0.8436 | | 0.7146 | 10.5 | 1250 | 1.0397 | 0.7876 | | 0.5418 | 12.6 | 1500 | 1.0359 | 0.7662 | | 0.4649 | 14.7 | 1750 | 1.0469 | 0.7467 | | 0.4127 | 16.8 | 2000 | 1.0655 | 0.7404 | | 0.3881 | 18.9 | 2250 | 1.0909 | 0.7351 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu102 - Datasets 2.0.0 - Tokenizers 0.11.6
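None of the scasutt toy-training cards in this dump shows how to run the model. A minimal ASR sketch that applies to any of them, assuming 16 kHz input audio (the wav2vec2-base pretraining rate); the file path is purely illustrative.

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition",
               model="scasutt/wav2vec2-base_toy_train_data_random_noise")
# The pipeline decodes and resamples common audio formats via ffmpeg;
# wav2vec2-base expects 16 kHz input.
print(asr("sample.wav")["text"])
```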
scasutt/wav2vec2-base_toy_train_data_random_noise_0.1
scasutt
2022-03-27T00:13:42Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-26T22:03:20Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base_toy_train_data_random_noise_0.1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base_toy_train_data_random_noise_0.1 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9263 - Wer: 0.7213 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.1296 | 2.1 | 250 | 3.5088 | 1.0 | | 3.0728 | 4.2 | 500 | 3.1694 | 1.0 | | 1.8686 | 6.3 | 750 | 1.3414 | 0.9321 | | 1.1241 | 8.4 | 1000 | 1.0196 | 0.8321 | | 0.8704 | 10.5 | 1250 | 0.9387 | 0.7962 | | 0.6734 | 12.6 | 1500 | 0.9309 | 0.7640 | | 0.5832 | 14.7 | 1750 | 0.9329 | 0.7346 | | 0.5207 | 16.8 | 2000 | 0.9060 | 0.7247 | | 0.4857 | 18.9 | 2250 | 0.9263 | 0.7213 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu102 - Datasets 2.0.0 - Tokenizers 0.11.6
huggingtweets/mkobach-naval-shaneaparrish
huggingtweets
2022-03-27T00:07:05Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-27T00:04:05Z
--- language: en thumbnail: http://www.huggingtweets.com/mkobach-naval-shaneaparrish/1648339620049/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1374075536595505154/1_1jV_AF_400x400.png&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1253758424292171778/48gD7Hne_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1256841238298292232/ycqwaMI2_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Matthew Kobach & Shane Parrish & Naval</div> <div style="text-align: center; font-size: 14px;">@mkobach-naval-shaneaparrish</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Matthew Kobach & Shane Parrish & Naval. | Data | Matthew Kobach | Shane Parrish | Naval | | --- | --- | --- | --- | | Tweets downloaded | 3248 | 3197 | 3249 | | Retweets | 135 | 102 | 181 | | Short tweets | 444 | 147 | 617 | | Tweets kept | 2669 | 2948 | 2451 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/17cy2tt4/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mkobach-naval-shaneaparrish's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1zkb00dh) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1zkb00dh/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/mkobach-naval-shaneaparrish') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
scasutt/wav2vec2-base_toy_train_data_masked_audio
scasutt
2022-03-26T22:02:44Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-26T14:57:40Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base_toy_train_data_masked_audio results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base_toy_train_data_masked_audio This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1950 - Wer: 0.7340 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.1287 | 2.1 | 250 | 3.4581 | 1.0 | | 3.0259 | 4.2 | 500 | 2.8099 | 0.9999 | | 1.4881 | 6.3 | 750 | 1.2929 | 0.8950 | | 0.9665 | 8.4 | 1000 | 1.1675 | 0.8346 | | 0.7614 | 10.5 | 1250 | 1.1388 | 0.8003 | | 0.5858 | 12.6 | 1500 | 1.1510 | 0.7672 | | 0.5005 | 14.7 | 1750 | 1.1606 | 0.7532 | | 0.4486 | 16.8 | 2000 | 1.1571 | 0.7427 | | 0.4224 | 18.9 | 2250 | 1.1950 | 0.7340 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu102 - Datasets 2.0.0 - Tokenizers 0.11.6
Mnauel/wav2vec2-base-finetuned-ks
Mnauel
2022-03-26T20:53:27Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "audio-classification", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2022-03-12T10:51:33Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: wav2vec2-base-finetuned-ks results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-finetuned-ks This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5766 - Accuracy: 0.8308 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 7 | 0.7247 | 0.7462 | | No log | 2.0 | 14 | 0.6844 | 0.7615 | | 0.4279 | 3.0 | 21 | 0.7254 | 0.7462 | | 0.4279 | 4.0 | 28 | 0.5891 | 0.8 | | 0.4279 | 5.0 | 35 | 0.6991 | 0.7462 | | 0.4478 | 6.0 | 42 | 0.6579 | 0.7615 | | 0.4478 | 7.0 | 49 | 0.6164 | 0.8 | | 0.4478 | 8.0 | 56 | 0.6191 | 0.8077 | | 0.4194 | 9.0 | 63 | 0.5766 | 0.8308 | | 0.4194 | 10.0 | 70 | 0.5704 | 0.8154 | | 0.4194 | 11.0 | 77 | 0.6518 | 0.8 | | 0.3833 | 12.0 | 84 | 0.6190 | 0.8077 | | 0.3833 | 13.0 | 91 | 0.5693 | 0.8231 | | 0.3833 | 14.0 | 98 | 0.5628 | 0.8231 | | 0.3607 | 15.0 | 105 | 0.5741 | 0.8154 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.10.3
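The card does not say which classes this model predicts ("ks" suggests keyword spotting in the SUPERB style, but that is a guess). A minimal audio-classification sketch, with a hypothetical audio path:

```python
from transformers import pipeline

clf = pipeline("audio-classification", model="Mnauel/wav2vec2-base-finetuned-ks")
# Print the three highest-scoring labels for a local audio file.
for pred in clf("sample.wav", top_k=3):
    print(f"{pred['label']}: {pred['score']:.3f}")
```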
dannyvas23/electricidad-small-discriminator-finetuned-clasificacion-texto-suicida
dannyvas23
2022-03-26T19:22:14Z
25
1
transformers
[ "transformers", "pytorch", "tensorboard", "electra", "text-classification", "generated_from_trainer", "sentiment", "emotion", "es", "license:afl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-26T17:19:56Z
--- license: afl-3.0 language: "es" tags: - generated_from_trainer - sentiment - emotion widget: - text: "La vida no merece la pena" example_title: "Ejemplo 1" - text: "Para vivir así lo mejor es estar muerto" example_title: "Ejemplo 2" - text: "me siento triste por no poder viajar" example_title: "Ejemplo 3" - text: "Quiero terminar con todo" example_title: "Ejemplo 4" - text: "Disfruto de la vista" example_title: "Ejemplo 5" metrics: - accuracy model-index: - name: electricidad-small-discriminator-finetuned-clasificacion-texto-suicida results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # electricidad-small-discriminator-finetuned-clasificacion-texto-suicida This model is a fine-tuned version of [mrm8488/electricidad-small-discriminator](https://huggingface.co/mrm8488/electricidad-small-discriminator) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0458 - Accuracy: 0.9916 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Validation Loss | Accuracy | |:-------------:|:-----:|:---------------:|:--------:| | 0.161100 | 1.0 | 0.133057 | 0.952718 | | 0.134500 | 2.0 | 0.110966 | 0.960804 | | 0.108500 | 3.0 | 0.086417 | 0.970835 | | 0.099400 | 4.0 | 0.073618 | 0.974856 | | 0.090500 | 5.0 | 0.065231 | 0.979629 | | 0.080700 | 6.0 | 0.060849 | 0.982324 | | 0.069200 | 7.0 | 0.054718 | 0.986125 | | 0.060400 | 8.0 | 0.051153 | 0.985948 | | 0.048200 | 9.0 | 0.045747 | 0.989748 | | 0.045500 | 10.0 | 0.049992 | 0.988069 | | 0.043400 | 11.0 | 0.046325 | 0.990234 | | 0.034300 | 12.0 | 0.050746 | 0.989792 | | 0.032900 | 13.0 | 0.043434 | 0.991737 | | 0.028400 | 14.0 | 0.045003 | 0.991869 | | 0.022300 | 15.0 | 0.045819 | 0.991648 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
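A minimal inference sketch using one of the card's own widget examples; the label names returned are whatever the checkpoint's config defines, which this card does not document.

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="dannyvas23/electricidad-small-discriminator-finetuned-clasificacion-texto-suicida",
)
# One of the card's widget examples ("I feel sad about not being able to travel").
print(clf("me siento triste por no poder viajar"))
```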
dannyvas23/clasificacion-texto-suicida-finetuned-amazon-review
dannyvas23
2022-03-26T17:12:23Z
24
2
transformers
[ "transformers", "pytorch", "tensorboard", "electra", "text-classification", "generated_from_trainer", "sentiment", "emotion", "es", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-21T19:26:40Z
--- language: "es" tags: - generated_from_trainer - sentiment - emotion widget: - text: "no me gusta esta vida." example_title: "Ejemplo 1" - text: "odio estar ahi" example_title: "Ejemplo 2" - text: "me siento triste por no poder viajar" example_title: "Ejemplo 3" metrics: - accuracy model-index: - name: clasificacion-texto-suicida-finetuned-amazon-review results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # clasificacion-texto-suicida-finetuned-amazon-review This model is a fine-tuned version of [mrm8488/electricidad-small-discriminator](https://huggingface.co/mrm8488/electricidad-small-discriminator) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1546 - Accuracy: 0.9488 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.1643 | 1.0 | 12022 | 0.1546 | 0.9488 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
bigmorning/distilgpt2-500e
bigmorning
2022-03-26T16:37:42Z
5
0
transformers
[ "transformers", "tf", "gpt2", "text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-26T16:31:57Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: distilgpt2-500e results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt2-500e This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results ### Framework versions - Transformers 4.17.0 - TensorFlow 2.8.0 - Datasets 2.0.0 - Tokenizers 0.11.6
bigmorning/distilbert1000e
bigmorning
2022-03-26T15:31:46Z
5
0
transformers
[ "transformers", "tf", "distilbert", "fill-mask", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-26T15:27:21Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: distilbert1000e results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert1000e This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results ### Framework versions - Transformers 4.17.0 - TensorFlow 2.8.0 - Datasets 2.0.0 - Tokenizers 0.11.6
uw-madison/nystromformer-1024
uw-madison
2022-03-26T14:58:18Z
44
0
transformers
[ "transformers", "pytorch", "nystromformer", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-23T18:56:40Z
Nystromformer for sequence length 1024, trained on WikiText-103 v1 for 150 epochs.
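Since the card is a single sentence, here is a minimal fill-mask sketch, assuming a transformers version recent enough to include the Nyströmformer architecture; using `tokenizer.mask_token` avoids guessing the checkpoint's mask string.

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="uw-madison/nystromformer-1024")
masked = f"Paris is the {unmasker.tokenizer.mask_token} of France."
for pred in unmasker(masked):
    print(pred["token_str"], round(pred["score"], 3))
```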
bigmorning/distilbert500e
bigmorning
2022-03-26T14:54:50Z
4
0
transformers
[ "transformers", "tf", "distilbert", "fill-mask", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-26T14:48:24Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: distilbert500e results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert500e This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results ### Framework versions - Transformers 4.17.0 - TensorFlow 2.8.0 - Datasets 2.0.0 - Tokenizers 0.11.6
rahulacj/mbart-large-cc25-finetuned-hi-to-en
rahulacj
2022-03-26T14:06:02Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-18T10:19:49Z
--- tags: - generated_from_trainer metrics: - bleu model-index: - name: mbart-large-cc25-finetuned-hi-to-en results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mbart-large-cc25-finetuned-hi-to-en This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.4710 - Bleu: 16.6154 - Gen Len: 42.6244 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:| | 1.5705 | 1.0 | 3955 | 1.4858 | 14.8984 | 47.6759 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
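The card never shows inference. A sketch of Hindi-to-English generation with MBart, where the language codes `hi_IN`/`en_XX` follow the standard mbart-large-cc25 convention rather than anything this card states, and the input sentence is illustrative:

```python
from transformers import MBartForConditionalGeneration, MBartTokenizer

name = "rahulacj/mbart-large-cc25-finetuned-hi-to-en"
tokenizer = MBartTokenizer.from_pretrained(name, src_lang="hi_IN", tgt_lang="en_XX")
model = MBartForConditionalGeneration.from_pretrained(name)

inputs = tokenizer("मुझे हिंदी से अंग्रेज़ी अनुवाद चाहिए", return_tensors="pt")
# Force English as the target language when decoding.
generated = model.generate(**inputs,
                           decoder_start_token_id=tokenizer.lang_code_to_id["en_XX"],
                           max_length=64)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```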
zuppif/versioning-test
zuppif
2022-03-26T13:35:30Z
0
0
null
[ "region:us" ]
null
2022-03-26T13:34:47Z
|    | uid                                                                                                                     |   hidden_size |
|---:|:------------------------------------------------------------------------------------------------------------------------|--------------:|
|  0 | [e87a4e028b11ec7bf770c6f3ab5c6349](https://huggingface.co/zuppif/versioning-test/tree/e87a4e028b11ec7bf770c6f3ab5c6349) | 8 |
|  1 | [48f2a327cfb7cb0f9b519d9abf73a9be](https://huggingface.co/zuppif/versioning-test/tree/48f2a327cfb7cb0f9b519d9abf73a9be) | 16 |
|  2 | [1c9d18df9ec06b5f7e2f49b2ef1cb826](https://huggingface.co/zuppif/versioning-test/tree/1c9d18df9ec06b5f7e2f49b2ef1cb826) | 32 |
Roshan777/finetuning-sentiment-model-300-samples
Roshan777
2022-03-26T12:54:48Z
12
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-24T13:02:26Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-300-samples results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb args: plain_text metrics: - name: Accuracy type: accuracy value: 0.6833333333333333 - name: F1 type: f1 value: 0.6153846153846154 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-300-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.6567 - Accuracy: 0.6833 - F1: 0.6154 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
Mr-Wick/Roberta
Mr-Wick
2022-03-26T12:39:55Z
3
0
transformers
[ "transformers", "tf", "roberta", "question-answering", "generated_from_keras_callback", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-03-23T16:08:46Z
--- license: mit tags: - generated_from_keras_callback model-index: - name: Roberta results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Roberta This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 16476, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results ### Framework versions - Transformers 4.17.0 - TensorFlow 2.8.0 - Datasets 2.0.0 - Tokenizers 0.11.6
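The card gives no usage section. A minimal extractive-QA sketch; since the repository only ships TensorFlow weights (the `tf` tag above), the pipeline is pointed at TF explicitly, and the question/context pair is illustrative.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="Mr-Wick/Roberta", framework="tf")
# Extract the answer span from the context.
print(qa(question="What was fine-tuned?",
         context="This model is a fine-tuned version of roberta-base."))
```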
donyd/distilbert-finetuned-imdb
donyd
2022-03-26T10:29:06Z
4
0
transformers
[ "transformers", "tf", "distilbert", "fill-mask", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-26T00:32:31Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: donyd/distilbert-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # donyd/distilbert-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.8432 - Validation Loss: 2.6247 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -688, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 2.8432 | 2.6247 | 0 | ### Framework versions - Transformers 4.17.0 - TensorFlow 2.7.0 - Tokenizers 0.11.6
lighteternal/wav2vec2-large-xlsr-53-greek
lighteternal
2022-03-26T10:12:37Z
2,071
8
transformers
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "hf-asr-leaderboard", "speech", "xlsr-fine-tuning-week", "el", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: el datasets: - common_voice tags: - audio - hf-asr-leaderboard - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: XLSR Wav2Vec2 Greek by Lighteternal results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: CommonVoice (EL), CSS10 (EL) type: CSS10 + mozilla-foundation/common_voice_7_0 args: el metrics: - name: Test WER type: wer value: 10.497628 - name: Test CER type: cer value: 2.875260 --- # Greek (el) version of the XLSR-Wav2Vec2 automatic speech recognition (ASR) model ### By the Hellenic Army Academy and the Technical University of Crete * language: el * license: apache-2.0 * dataset: CommonVoice (EL), 364MB: https://commonvoice.mozilla.org/el/datasets + CSS10 (EL), 1.22GB: https://github.com/Kyubyong/css10 * model: XLSR-Wav2Vec2, trained for 50 epochs * metrics: Word Error Rate (WER) ## Model description UPDATE: We repeated the fine-tuning process using an additional 1.22GB dataset from CSS10. Wav2Vec2 is a pretrained model for Automatic Speech Recognition (ASR) and was released in September 2020 by Alexei Baevski, Michael Auli, and Alex Conneau. Soon after the superior performance of Wav2Vec2 was demonstrated on the English ASR dataset LibriSpeech, Facebook AI presented XLSR-Wav2Vec2. XLSR stands for cross-lingual speech representations and refers to XLSR-Wav2Vec2's ability to learn speech representations that are useful across multiple languages. Similar to Wav2Vec2, XLSR-Wav2Vec2 learns powerful speech representations from hundreds of thousands of hours of unlabeled speech in more than 50 languages. Similar to BERT's masked language modeling, the model learns contextualized speech representations by randomly masking feature vectors before passing them to a transformer network. This model was trained for 50 epochs on a single NVIDIA RTX 3080, for approx. 8 hours. ## How to use for inference: For a live demo, make sure that speech files are sampled at 16kHz. Instructions to test on CommonVoice extracts are provided in the ASR_Inference.ipynb. Snippet also available below: ```python #!/usr/bin/env python # coding: utf-8 # Loading dependencies and defining preprocessing functions from transformers import Wav2Vec2ForCTC from transformers import Wav2Vec2Processor from datasets import load_dataset, load_metric import re import torchaudio import librosa import numpy as np import torch chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“\\%\\‘\\”\\�]' def remove_special_characters(batch): batch["text"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() + " " return batch def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = speech_array[0].numpy() batch["sampling_rate"] = sampling_rate batch["target_text"] = batch["text"] return batch def resample(batch): batch["speech"] = librosa.resample(np.asarray(batch["speech"]), 48_000, 16_000) batch["sampling_rate"] = 16_000 return batch def prepare_dataset(batch): # check that all files have the correct sampling rate assert ( len(set(batch["sampling_rate"])) == 1 ), f"Make sure all inputs have the same sampling rate of {processor.feature_extractor.sampling_rate}." batch["input_values"] = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0]).input_values with processor.as_target_processor(): batch["labels"] = processor(batch["target_text"]).input_ids return batch # Loading model and dataset processor model = Wav2Vec2ForCTC.from_pretrained("lighteternal/wav2vec2-large-xlsr-53-greek").to("cuda") processor = Wav2Vec2Processor.from_pretrained("lighteternal/wav2vec2-large-xlsr-53-greek") # Preparing speech dataset to be suitable for inference common_voice_test = load_dataset("common_voice", "el", split="test") common_voice_test = common_voice_test.remove_columns(["accent", "age", "client_id", "down_votes", "gender", "locale", "segment", "up_votes"]) common_voice_test = common_voice_test.map(remove_special_characters, remove_columns=["sentence"]) common_voice_test = common_voice_test.map(speech_file_to_array_fn, remove_columns=common_voice_test.column_names) common_voice_test = common_voice_test.map(resample, num_proc=8) common_voice_test = common_voice_test.map(prepare_dataset, remove_columns=common_voice_test.column_names, batch_size=8, num_proc=8, batched=True) # Loading test dataset common_voice_test_transcription = load_dataset("common_voice", "el", split="test") # Performing inference on a random sample. Change the "example" value to try inference on different CommonVoice extracts example = 123 input_dict = processor(common_voice_test["input_values"][example], return_tensors="pt", sampling_rate=16_000, padding=True) logits = model(input_dict.input_values.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) print("Prediction:") print(processor.decode(pred_ids[0])) # πού θέλεις να πάμε ρώτησε φοβισμένα ο βασιλιάς print("\nReference:") print(common_voice_test_transcription["sentence"][example].lower()) # πού θέλεις να πάμε; ρώτησε φοβισμένα ο βασιλιάς. ``` ## Evaluation The model can be evaluated as follows on the Greek test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "el", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("lighteternal/wav2vec2-large-xlsr-53-greek") model = Wav2Vec2ForCTC.from_pretrained("lighteternal/wav2vec2-large-xlsr-53-greek") model.to("cuda") chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“\\%\\‘\\”\\�]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. # We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 10.497628 % ### How to use for training: Instructions and code to replicate the process are provided in the Fine_Tune_XLSR_Wav2Vec2_on_Greek_ASR_with_🤗_Transformers.ipynb notebook. ## Metrics | Metric | Value | | ----------- | ----------- | | Training Loss | 0.0545 | | Validation Loss | 0.1661 | | CER on CommonVoice Test (%) &ast;| 2.8753 | | WER on CommonVoice Test (%) &ast;| 10.4976 | &ast; Reference transcripts were lower-cased and stripped of punctuation and special characters. ### Acknowledgement The research work was supported by the Hellenic Foundation for Research and Innovation (HFRI) under the HFRI PhD Fellowship grant (Fellowship Number: 50, 2nd call). Based on the tutorial of Patrick von Platen: https://huggingface.co/blog/fine-tune-xlsr-wav2vec2 Original colab notebook here: https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_Tune_XLSR_Wav2Vec2_on_Turkish_ASR_with_%F0%9F%A4%97_Transformers.ipynb#scrollTo=V7YOT2mnUiea
scasutt/wav2vec2-base_toy_train_data_augmented
scasutt
2022-03-26T10:09:16Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-26T07:36:21Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base_toy_train_data_augmented results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base_toy_train_data_augmented This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0238 - Wer: 0.6969 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.12 | 1.05 | 250 | 3.3998 | 0.9982 | | 3.0727 | 2.1 | 500 | 3.1261 | 0.9982 | | 1.9729 | 3.15 | 750 | 1.4868 | 0.9464 | | 1.3213 | 4.2 | 1000 | 1.2598 | 0.8833 | | 1.0508 | 5.25 | 1250 | 1.0014 | 0.8102 | | 0.8483 | 6.3 | 1500 | 0.9475 | 0.7944 | | 0.7192 | 7.35 | 1750 | 0.9493 | 0.7686 | | 0.6447 | 8.4 | 2000 | 0.9872 | 0.7573 | | 0.6064 | 9.45 | 2250 | 0.9587 | 0.7447 | | 0.5384 | 10.5 | 2500 | 0.9332 | 0.7320 | | 0.4985 | 11.55 | 2750 | 0.9926 | 0.7315 | | 0.4643 | 12.6 | 3000 | 1.0008 | 0.7292 | | 0.4565 | 13.65 | 3250 | 0.9522 | 0.7171 | | 0.449 | 14.7 | 3500 | 0.9685 | 0.7140 | | 0.4307 | 15.75 | 3750 | 1.0080 | 0.7077 | | 0.4239 | 16.81 | 4000 | 0.9950 | 0.7023 | | 0.389 | 17.86 | 4250 | 1.0260 | 0.7007 | | 0.3471 | 18.91 | 4500 | 1.0012 | 0.6966 | | 0.3276 | 19.96 | 4750 | 1.0238 | 0.6969 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu102 - Datasets 2.0.0 - Tokenizers 0.11.6
calebcsjm/reversed_harrypotter_generation
calebcsjm
2022-03-26T05:02:52Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-25T20:58:10Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: reversed_harrypotter_generation results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # reversed_harrypotter_generation This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
nikhedward/t5-small-finetuned-multi-news
nikhedward
2022-03-26T04:31:49Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:multi_news", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-26T03:43:29Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - multi_news metrics: - rouge model-index: - name: t5-small-finetuned-multi-news results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: multi_news type: multi_news args: default metrics: - name: Rouge1 type: rouge value: 14.5549 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-multi-news This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the multi_news dataset. It achieves the following results on the evaluation set: - Loss: 2.7775 - Rouge1: 14.5549 - Rouge2: 4.5934 - Rougel: 11.1178 - Rougelsum: 12.8964 - Gen Len: 19.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | 3.0211 | 1.0 | 1405 | 2.7775 | 14.5549 | 4.5934 | 11.1178 | 12.8964 | 19.0 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
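A minimal summarization sketch for this card; the summarization pipeline applies the standard "summarize: " T5 prefix automatically, and the input text below is a placeholder rather than a multi_news example.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="nikhedward/t5-small-finetuned-multi-news")
article = ("The city council approved a new transit plan on Tuesday. "
           "The plan adds two bus lines and extends light-rail service, "
           "with construction expected to begin next spring.")
print(summarizer(article, max_length=64, min_length=16)[0]["summary_text"])
```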
huggingtweets/atarifounders
huggingtweets
2022-03-26T03:45:11Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-10T18:31:26Z
--- language: en thumbnail: http://www.huggingtweets.com/atarifounders/1648266306699/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1507523916981583875/6n7ng67H_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">koala/claw/soppy</div> <div style="text-align: center; font-size: 14px;">@atarifounders</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from koala/claw/soppy. | Data | koala/claw/soppy | | --- | --- | | Tweets downloaded | 3239 | | Retweets | 129 | | Short tweets | 883 | | Tweets kept | 2227 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2gsc0jwi/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @atarifounders's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/tl1eu60e) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/tl1eu60e/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/atarifounders') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
bigmorning/try_tpu_distilbert
bigmorning
2022-03-26T03:44:06Z
5
0
transformers
[ "transformers", "tf", "distilbert", "fill-mask", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-26T03:25:38Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: try_tpu_distilbert results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # try_tpu_distilbert This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results ### Framework versions - Transformers 4.17.0 - TensorFlow 2.8.0 - Datasets 2.0.0 - Tokenizers 0.11.6
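Since the card leaves usage blank, a minimal fill-mask sketch follows; it assumes the repo ships TensorFlow weights (the record's tags include `tf`):

```python
from transformers import AutoTokenizer, TFAutoModelForMaskedLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("bigmorning/try_tpu_distilbert")
model = TFAutoModelForMaskedLM.from_pretrained("bigmorning/try_tpu_distilbert")

# DistilBERT checkpoints use the [MASK] token
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill_mask("The capital of France is [MASK]."))
```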
huggingtweets/_stevenshoe-mkobach
huggingtweets
2022-03-25T22:23:51Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-25T22:08:01Z
--- language: en thumbnail: http://www.huggingtweets.com/_stevenshoe-mkobach/1648247026634/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1374075536595505154/1_1jV_AF_400x400.png&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1505053150478229505/wAa1lc04_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Matthew Kobach & Steven Shoemaker</div> <div style="text-align: center; font-size: 14px;">@_stevenshoe-mkobach</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Matthew Kobach & Steven Shoemaker. | Data | Matthew Kobach | Steven Shoemaker | | --- | --- | --- | | Tweets downloaded | 3242 | 1319 | | Retweets | 136 | 56 | | Short tweets | 443 | 125 | | Tweets kept | 2663 | 1138 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/48je6le3/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @_stevenshoe-mkobach's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3oih18qf) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3oih18qf/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/_stevenshoe-mkobach') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/huggingpuppy
huggingtweets
2022-03-25T18:42:54Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-25T18:41:40Z
--- language: en thumbnail: http://www.huggingtweets.com/huggingpuppy/1648233768787/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1504530325526900756/QOTZak3q_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">hug. (INGROUP INTERN)</div> <div style="text-align: center; font-size: 14px;">@huggingpuppy</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from hug. (INGROUP INTERN). | Data | hug. (INGROUP INTERN) | | --- | --- | | Tweets downloaded | 3249 | | Retweets | 97 | | Short tweets | 816 | | Tweets kept | 2336 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1wq0kiqq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @huggingpuppy's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3aonv9kh) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3aonv9kh/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/huggingpuppy') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
pinecone/msmarco-distilbert-base-tas-b-covid
pinecone
2022-03-25T18:30:52Z
152
0
sentence-transformers
[ "sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-25T18:20:41Z
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---

# pinecone/msmarco-distilbert-base-tas-b-covid

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.

<!--- Describe your model here -->

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('pinecone/msmarco-distilbert-base-tas-b-covid')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch

def cls_pooling(model_output, attention_mask):
    return model_output[0][:, 0]

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('pinecone/msmarco-distilbert-base-tas-b-covid')
model = AutoModel.from_pretrained('pinecone/msmarco-distilbert-base-tas-b-covid')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, CLS pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Evaluation Results

<!--- Describe how your model was evaluated -->

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=pinecone/msmarco-distilbert-base-tas-b-covid)

## Training

The model was trained with the parameters:

**DataLoader**:

`torch.utils.data.dataloader.DataLoader` of length 6250 with parameters:

```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```

**Loss**:

`sentence_transformers.losses.MarginMSELoss.MarginMSELoss`

Parameters of the fit()-Method:

```
{
    "epochs": 10,
    "evaluation_steps": 0,
    "evaluator": "NoneType",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'transformers.optimization.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 6250,
    "weight_decay": 0.01
}
```

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: DistilBertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Citing & Authors

<!--- Describe where people can find more information -->
patrickvonplaten/deberta_amazon_reviews_v1
patrickvonplaten
2022-03-25T17:57:32Z
10
0
transformers
[ "transformers", "pytorch", "tensorboard", "deberta-v2", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-25T10:12:59Z
--- license: mit tags: - generated_from_trainer model-index: - name: deberta_amazon_reviews_v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta_amazon_reviews_v1 This model is a fine-tuned version of [patrickvonplaten/deberta_v3_amazon_reviews](https://huggingface.co/patrickvonplaten/deberta_v3_amazon_reviews) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 200 - num_epochs: 2 ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
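A hedged usage sketch (the fine-tuning dataset is unspecified, so the label names and their meaning are unknown):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="patrickvonplaten/deberta_amazon_reviews_v1")

# With no documented id2label mapping, outputs may surface as generic
# LABEL_0 / LABEL_1 style names.
print(classifier("Great product, exactly as described!"))
```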
scasutt/wav2vec2-base_toy_train_data_augment_0.1
scasutt
2022-03-25T17:44:40Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-25T14:40:37Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base_toy_train_data_augment_0.1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base_toy_train_data_augment_0.1 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.3786 - Wer: 0.9954 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.1342 | 1.05 | 250 | 3.3901 | 0.9954 | | 3.0878 | 2.1 | 500 | 3.4886 | 0.9954 | | 3.0755 | 3.15 | 750 | 3.4616 | 0.9954 | | 3.0891 | 4.2 | 1000 | 3.5316 | 0.9954 | | 3.0724 | 5.25 | 1250 | 3.2608 | 0.9954 | | 3.0443 | 6.3 | 1500 | 3.3881 | 0.9954 | | 3.0421 | 7.35 | 1750 | 3.4507 | 0.9954 | | 3.0448 | 8.4 | 2000 | 3.4525 | 0.9954 | | 3.0455 | 9.45 | 2250 | 3.3342 | 0.9954 | | 3.0425 | 10.5 | 2500 | 3.3385 | 0.9954 | | 3.0457 | 11.55 | 2750 | 3.4411 | 0.9954 | | 3.0375 | 12.6 | 3000 | 3.4459 | 0.9954 | | 3.0459 | 13.65 | 3250 | 3.3883 | 0.9954 | | 3.0455 | 14.7 | 3500 | 3.3417 | 0.9954 | | 3.0524 | 15.75 | 3750 | 3.3908 | 0.9954 | | 3.0443 | 16.81 | 4000 | 3.3932 | 0.9954 | | 3.0446 | 17.86 | 4250 | 3.4052 | 0.9954 | | 3.0412 | 18.91 | 4500 | 3.3776 | 0.9954 | | 3.0358 | 19.96 | 4750 | 3.3786 | 0.9954 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu102 - Datasets 2.0.0 - Tokenizers 0.11.6
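A minimal transcription sketch, assuming a local 16 kHz mono recording; note the reported WER of ~0.995 means transcriptions from this toy model will be mostly wrong:

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="scasutt/wav2vec2-base_toy_train_data_augment_0.1")

# "sample.wav" is a placeholder path; decoding audio files requires ffmpeg
print(asr("sample.wav"))
```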
manandey/wav2vec2-large-xlsr-_irish
manandey
2022-03-25T16:53:49Z
11
0
transformers
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "hf-asr-leaderboard", "ga", "dataset:common_voice", "doi:10.57967/hf/0190", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
---
language: ga
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
- hf-asr-leaderboard
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Irish by Manan Dey
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice ga-IE
      type: common_voice
      args: ga-IE
    metrics:
    - name: Test WER
      type: wer
      value: 42.34
---

# Wav2Vec2-Large-XLSR-53-Irish

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Irish using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz.

## Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "ga-IE", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("manandey/wav2vec2-large-xlsr-_irish")
model = Wav2Vec2ForCTC.from_pretrained("manandey/wav2vec2-large-xlsr-_irish")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```

## Evaluation

The model can be evaluated as follows on the Irish test data of Common Voice.

```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re

test_dataset = load_dataset("common_voice", "ga-IE", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("manandey/wav2vec2-large-xlsr-_irish")
model = Wav2Vec2ForCTC.from_pretrained("manandey/wav2vec2-large-xlsr-_irish")
model.to("cuda")

chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\’\–\(\)]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Batched inference over the preprocessed arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```

**Test Result**: 42.34%

## Training

The Common Voice `train` and `validation` datasets were used for training.
ianMconversica/autotrain-parrot_finetune_v1-667919695
ianMconversica
2022-03-25T15:41:11Z
4
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain", "unk", "dataset:McIan91/autotrain-data-parrot_finetune_v1", "co2_eq_emissions", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-25T12:27:52Z
--- tags: autotrain language: unk widget: - text: "I love AutoTrain 🤗" datasets: - McIan91/autotrain-data-parrot_finetune_v1 co2_eq_emissions: 207.64739623144084 --- # Model Trained Using AutoTrain - Problem type: Summarization - Model ID: 667919695 - CO2 Emissions (in grams): 207.64739623144084 ## Validation Metrics - Loss: 0.06461456418037415 - Rouge1: 70.5184 - Rouge2: 66.9204 - RougeL: 70.4464 - RougeLsum: 70.4705 - Gen Len: 18.5385 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/McIan91/autotrain-parrot_finetune_v1-667919695 ```
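Beyond cURL, the checkpoint can also be loaded locally with `transformers` (a sketch; the repo id below is the one from this record, while the cURL endpoint above points at the McIan91 namespace):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "ianMconversica/autotrain-parrot_finetune_v1-667919695"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```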
Rocketknight1/temp-colab-upload-test4
Rocketknight1
2022-03-25T15:06:46Z
5
0
transformers
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-25T15:06:07Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Rocketknight1/temp-colab-upload-test4 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Rocketknight1/temp-colab-upload-test4 This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0000 - Validation Loss: 0.0000 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': 0.001, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.0000 | 0.0000 | 0 | | 0.0000 | 0.0000 | 1 | ### Framework versions - Transformers 4.18.0.dev0 - TensorFlow 2.8.0 - Datasets 2.0.0 - Tokenizers 0.11.6
huggingtweets/rivatez
huggingtweets
2022-03-25T14:57:29Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-25T14:51:51Z
--- language: en thumbnail: http://www.huggingtweets.com/rivatez/1648220244511/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1421403684085374979/SoqYa6o3_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Riva</div> <div style="text-align: center; font-size: 14px;">@rivatez</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Riva. | Data | Riva | | --- | --- | | Tweets downloaded | 3178 | | Retweets | 780 | | Short tweets | 405 | | Tweets kept | 1993 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2qe0i10s/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @rivatez's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2rspxzzv) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2rspxzzv/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/rivatez') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
bigmorning/try-m-e-perplexity594
bigmorning
2022-03-25T13:33:19Z
10
0
transformers
[ "transformers", "tf", "gpt2", "text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-25T13:28:27Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: try-m-e-perplexity594 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # try-m-e-perplexity594 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results ### Framework versions - Transformers 4.17.0 - TensorFlow 2.8.0 - Datasets 2.0.0 - Tokenizers 0.11.6
vumichien/mobilebert-finetuned-ner
vumichien
2022-03-25T13:14:33Z
39
0
transformers
[ "transformers", "tf", "mobilebert", "token-classification", "generated_from_keras_callback", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-25T13:12:31Z
--- license: mit tags: - generated_from_keras_callback model-index: - name: tf-mobilebert-finetuned-ner results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # tf-mobilebert-finetuned-ner This model is a fine-tuned version of [mrm8488/mobilebert-finetuned-ner](https://huggingface.co/mrm8488/mobilebert-finetuned-ner) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.17.0 - TensorFlow 2.8.0 - Tokenizers 0.11.6
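A hedged inference sketch for this TensorFlow checkpoint (the entity schema is undocumented here, so the labels are whatever the upstream NER fine-tune used):

```python
from transformers import AutoTokenizer, TFAutoModelForTokenClassification, pipeline

tokenizer = AutoTokenizer.from_pretrained("vumichien/mobilebert-finetuned-ner")
model = TFAutoModelForTokenClassification.from_pretrained("vumichien/mobilebert-finetuned-ner")

# aggregation_strategy="simple" merges word pieces into whole entities
ner = pipeline("token-classification", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(ner("My name is Clara and I live in Berkeley."))
```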
vumichien/mobilebert-uncased-squad-v2
vumichien
2022-03-25T13:09:07Z
72
0
transformers
[ "transformers", "tf", "mobilebert", "question-answering", "generated_from_keras_callback", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-03-25T13:07:29Z
--- license: mit tags: - generated_from_keras_callback model-index: - name: tf-mobilebert-uncased-squad-v2 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # tf-mobilebert-uncased-squad-v2 This model is a fine-tuned version of [csarron/mobilebert-uncased-squad-v2](https://huggingface.co/csarron/mobilebert-uncased-squad-v2) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.17.0 - TensorFlow 2.8.0 - Tokenizers 0.11.6
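A minimal extractive-QA sketch for the TensorFlow weights (assumed usage, mirroring the upstream SQuAD v2 fine-tune):

```python
from transformers import AutoTokenizer, TFAutoModelForQuestionAnswering, pipeline

tokenizer = AutoTokenizer.from_pretrained("vumichien/mobilebert-uncased-squad-v2")
model = TFAutoModelForQuestionAnswering.from_pretrained("vumichien/mobilebert-uncased-squad-v2")

qa = pipeline("question-answering", model=model, tokenizer=tokenizer)
print(qa(question="Where does Clara live?", context="My name is Clara and I live in Berkeley."))
```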
vumichien/emo-mobilebert
vumichien
2022-03-25T13:01:20Z
65
0
transformers
[ "transformers", "tf", "mobilebert", "text-classification", "generated_from_keras_callback", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-25T12:59:51Z
--- tags: - generated_from_keras_callback model-index: - name: tf-emo-mobilebert results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # tf-emo-mobilebert This model is a fine-tuned version of [lordtt13/emo-mobilebert](https://huggingface.co/lordtt13/emo-mobilebert) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.17.0 - TensorFlow 2.8.0 - Tokenizers 0.11.6
vumichien/albert-base-v2-squad2
vumichien
2022-03-25T12:48:59Z
76
0
transformers
[ "transformers", "tf", "albert", "question-answering", "generated_from_keras_callback", "endpoints_compatible", "region:us" ]
question-answering
2022-03-25T12:48:04Z
--- tags: - generated_from_keras_callback model-index: - name: tf-albert-base-v2-squad2 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # tf-albert-base-v2-squad2 This model is a fine-tuned version of [twmkn9/albert-base-v2-squad2](https://huggingface.co/twmkn9/albert-base-v2-squad2) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.17.0 - TensorFlow 2.8.0 - Tokenizers 0.11.6
microsoft/wavlm-base-sv
microsoft
2022-03-25T12:05:52Z
16,903
6
transformers
[ "transformers", "pytorch", "wavlm", "audio-xvector", "speech", "en", "arxiv:2110.13900", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
---
language:
- en
tags:
- speech
---

# WavLM-Base for Speaker Verification

[Microsoft's WavLM](https://github.com/microsoft/unilm/tree/master/wavlm)

The model was pretrained on 16kHz sampled speech audio with utterance and speaker contrastive loss. When using the model, make sure that your speech input is also sampled at 16kHz.

The model was pre-trained on 960h of [Librispeech](https://huggingface.co/datasets/librispeech_asr).

[Paper: WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900)

Authors: Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei

**Abstract**
*Self-supervised learning (SSL) achieves great success in speech recognition, while limited exploration has been attempted for other speech processing tasks. As speech signal contains multi-faceted information including speaker identity, paralinguistics, spoken content, etc., learning universal representations for all speech tasks is challenging. In this paper, we propose a new pre-trained model, WavLM, to solve full-stack downstream speech tasks. WavLM is built based on the HuBERT framework, with an emphasis on both spoken content modeling and speaker identity preservation. We first equip the Transformer structure with gated relative position bias to improve its capability on recognition tasks. For better speaker discrimination, we propose an utterance mixing training strategy, where additional overlapped utterances are created unsupervisely and incorporated during model training. Lastly, we scale up the training dataset from 60k hours to 94k hours. WavLM Large achieves state-of-the-art performance on the SUPERB benchmark, and brings significant improvements for various speech processing tasks on their representative benchmarks.*

The original model can be found under https://github.com/microsoft/unilm/tree/master/wavlm.

# Fine-tuning details

The model is fine-tuned on the [VoxCeleb1 dataset](https://www.robots.ox.ac.uk/~vgg/data/voxceleb/vox1.html) using an X-Vector head with an Additive Margin Softmax loss: [X-Vectors: Robust DNN Embeddings for Speaker Recognition](https://www.danielpovey.com/files/2018_icassp_xvectors.pdf)

# Usage

## Speaker Verification

```python
from transformers import Wav2Vec2FeatureExtractor, WavLMForXVector
from datasets import load_dataset
import torch

dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained('microsoft/wavlm-base-sv')
model = WavLMForXVector.from_pretrained('microsoft/wavlm-base-sv')

# audio files are decoded on the fly; batched slicing returns a list of
# audio dicts, so collect the raw arrays before feature extraction
audio = [x["array"] for x in dataset[:2]["audio"]]
inputs = feature_extractor(audio, padding=True, return_tensors="pt")
embeddings = model(**inputs).embeddings
embeddings = torch.nn.functional.normalize(embeddings, dim=-1).cpu()

# the resulting embeddings can be used for cosine similarity-based retrieval
cosine_sim = torch.nn.CosineSimilarity(dim=-1)
similarity = cosine_sim(embeddings[0], embeddings[1])
threshold = 0.86  # the optimal threshold is dataset-dependent
if similarity < threshold:
    print("Speakers are not the same!")
```

# License

The official license can be found [here](https://github.com/microsoft/UniSpeech/blob/main/LICENSE)

![design](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/wavlm.png)
scasutt/wav2vec2-base_toy_train_data_augment_0.1.csv
scasutt
2022-03-25T11:45:10Z
4
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-25T11:09:40Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base_toy_train_data_augment_0.1.csv results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base_toy_train_data_augment_0.1.csv This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.3933 - Wer: 0.9997 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.2787 | 0.84 | 200 | 3.5920 | 1.0 | | 3.0613 | 1.68 | 400 | 3.4069 | 1.0 | | 3.0481 | 2.52 | 600 | 3.4811 | 1.0 | | 2.896 | 3.36 | 800 | 2.3933 | 0.9997 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu102 - Datasets 2.0.0 - Tokenizers 0.11.6
Jezia/pytorch-pretrained-BigGAN
Jezia
2022-03-25T10:53:53Z
0
2
pytorch
[ "pytorch", "biggan", "dataset:ImageNet", "license:apache-2.0", "region:us" ]
null
2022-03-25T10:05:00Z
---
license: apache-2.0
library_name: pytorch
tags:
- biggan
datasets:
- ImageNet
---

## Model description

This is an op-for-op PyTorch reimplementation of DeepMind's BigGAN model with the pre-trained weights from DeepMind [biggan-deep-128](https://tfhub.dev/deepmind/biggan-deep-128/1).

## Training and evaluation data

The model is trained on the [ImageNet dataset](https://tfhub.dev/s?dataset=imagenet-ilsvrc-2012-cls), which consists of 1000 classes. All images are resized to 64 * 64 for the sake of convenience. The model takes noise as input and uses Conv2DTranspose layers for upsampling. Depending on the variant, the generated images are 128, 256, or 512 pixels on a side.

## How to use this model

You can use this model to generate new images.

```
import torch
from pytorch_pretrained_biggan import (BigGAN, one_hot_from_names, truncated_noise_sample,
                                       save_as_images, display_in_terminal)

model = BigGAN.from_pretrained('biggan-deep-256')
```

You can generate examples using a noise vector.

```
with torch.no_grad():
    output = model(noise_vector, class_vector, truncation)
```

## Intended use and biases

This model is not intended for production.

### Generated images

![Example](./example.png)

### Credits

- @thomwolf Thomas Wolf
- @vfdev-5 vfdev
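Expanding the generation snippet above into a runnable sketch using the helpers the card already imports ('soap bubble' is just an example ImageNet class name):

```python
import torch
from pytorch_pretrained_biggan import (BigGAN, one_hot_from_names,
                                       truncated_noise_sample, save_as_images)

model = BigGAN.from_pretrained('biggan-deep-256')

truncation = 0.4
# Build a one-hot class vector and a truncated noise vector
class_vector = torch.from_numpy(one_hot_from_names(['soap bubble'], batch_size=1))
noise_vector = torch.from_numpy(truncated_noise_sample(truncation=truncation, batch_size=1))

with torch.no_grad():
    output = model(noise_vector, class_vector, truncation)

# Writes the generated sample(s) as PNG files in the working directory
save_as_images(output.cpu())
```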
microsoft/wavlm-base-plus-sv
microsoft
2022-03-25T10:39:41Z
296,053
29
transformers
[ "transformers", "pytorch", "wavlm", "audio-xvector", "speech", "en", "arxiv:1912.07875", "arxiv:2106.06909", "arxiv:2101.00390", "arxiv:2110.13900", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
---
language:
- en
tags:
- speech
---

# WavLM-Base-Plus for Speaker Verification

[Microsoft's WavLM](https://github.com/microsoft/unilm/tree/master/wavlm)

The model was pretrained on 16kHz sampled speech audio with utterance and speaker contrastive loss. When using the model, make sure that your speech input is also sampled at 16kHz.

**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for a more detailed explanation of how to fine-tune the model.

The model was pre-trained on:

- 60,000 hours of [Libri-Light](https://arxiv.org/abs/1912.07875)
- 10,000 hours of [GigaSpeech](https://arxiv.org/abs/2106.06909)
- 24,000 hours of [VoxPopuli](https://arxiv.org/abs/2101.00390)

[Paper: WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900)

Authors: Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei

**Abstract**
*Self-supervised learning (SSL) achieves great success in speech recognition, while limited exploration has been attempted for other speech processing tasks. As speech signal contains multi-faceted information including speaker identity, paralinguistics, spoken content, etc., learning universal representations for all speech tasks is challenging. In this paper, we propose a new pre-trained model, WavLM, to solve full-stack downstream speech tasks. WavLM is built based on the HuBERT framework, with an emphasis on both spoken content modeling and speaker identity preservation. We first equip the Transformer structure with gated relative position bias to improve its capability on recognition tasks. For better speaker discrimination, we propose an utterance mixing training strategy, where additional overlapped utterances are created unsupervisely and incorporated during model training. Lastly, we scale up the training dataset from 60k hours to 94k hours. WavLM Large achieves state-of-the-art performance on the SUPERB benchmark, and brings significant improvements for various speech processing tasks on their representative benchmarks.*

The original model can be found under https://github.com/microsoft/unilm/tree/master/wavlm.

# Fine-tuning details

The model is fine-tuned on the [VoxCeleb1 dataset](https://www.robots.ox.ac.uk/~vgg/data/voxceleb/vox1.html) using an X-Vector head with an Additive Margin Softmax loss: [X-Vectors: Robust DNN Embeddings for Speaker Recognition](https://www.danielpovey.com/files/2018_icassp_xvectors.pdf)

# Usage

## Speaker Verification

```python
from transformers import Wav2Vec2FeatureExtractor, WavLMForXVector
from datasets import load_dataset
import torch

dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained('microsoft/wavlm-base-plus-sv')
model = WavLMForXVector.from_pretrained('microsoft/wavlm-base-plus-sv')

# audio files are decoded on the fly
audio = [x["array"] for x in dataset[:2]["audio"]]
inputs = feature_extractor(audio, padding=True, return_tensors="pt")
embeddings = model(**inputs).embeddings
embeddings = torch.nn.functional.normalize(embeddings, dim=-1).cpu()

# the resulting embeddings can be used for cosine similarity-based retrieval
cosine_sim = torch.nn.CosineSimilarity(dim=-1)
similarity = cosine_sim(embeddings[0], embeddings[1])
threshold = 0.86  # the optimal threshold is dataset-dependent
if similarity < threshold:
    print("Speakers are not the same!")
```

# License

The official license can be found [here](https://github.com/microsoft/UniSpeech/blob/main/LICENSE)

![design](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/wavlm.png)
eliasws/openApiT5-distilled-description-v3
eliasws
2022-03-25T09:30:37Z
1
0
sentence-transformers
[ "sentence-transformers", "pytorch", "t5", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-25T09:25:50Z
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---

# eliasws/openApiT5-distilled-description-v3

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.

<!--- Describe your model here -->

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('eliasws/openApiT5-distilled-description-v3')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, T5EncoderModel
import torch

# Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub. The architecture below is a T5 encoder,
# so load it with T5EncoderModel (a plain AutoModel would instantiate the
# full encoder-decoder, which requires decoder inputs).
tokenizer = AutoTokenizer.from_pretrained('eliasws/openApiT5-distilled-description-v3')
model = T5EncoderModel.from_pretrained('eliasws/openApiT5-distilled-description-v3')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Evaluation Results

<!--- Describe how your model was evaluated -->

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=eliasws/openApiT5-distilled-description-v3)

## Training

The model was trained with the parameters:

**DataLoader**:

`torch.utils.data.dataloader.DataLoader` of length 5547 with parameters:

```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```

**Loss**:

`sentence_transformers.losses.MSELoss.MSELoss`

Parameters of the fit()-Method:

```
{
    "epochs": 1,
    "evaluation_steps": 0,
    "evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'transformers.optimization.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 1109,
    "weight_decay": 0.01
}
```

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': None, 'do_lower_case': False}) with Transformer model: T5EncoderModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Citing & Authors

<!--- Describe where people can find more information -->
RomanEnikeev/distilbert-base-uncased-finetuned-cola
RomanEnikeev
2022-03-25T09:13:46Z
13
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-25T06:47:39Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5670814703238499 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8265 - Matthews Correlation: 0.5671 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5216 | 1.0 | 535 | 0.5536 | 0.4041 | | 0.3481 | 2.0 | 1070 | 0.5242 | 0.5206 | | 0.2372 | 3.0 | 1605 | 0.6162 | 0.5311 | | 0.1701 | 4.0 | 2140 | 0.7704 | 0.5461 | | 0.1304 | 5.0 | 2675 | 0.8265 | 0.5671 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
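A hedged inference sketch; CoLA is an acceptability-judgement task, and unless the repo customises `id2label`, the labels surface as generic `LABEL_0`/`LABEL_1` names:

```python
from transformers import pipeline

cola = pipeline("text-classification", model="RomanEnikeev/distilbert-base-uncased-finetuned-cola")

# For GLUE CoLA, label 0 conventionally means "unacceptable" and 1 "acceptable"
print(cola("The boy quickly ran across the finish line."))
```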
Mr-Wick/Albert
Mr-Wick
2022-03-25T07:59:38Z
4
0
transformers
[ "transformers", "tf", "tensorboard", "albert", "question-answering", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-24T19:29:49Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Mr-Wick/Albert results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Mr-Wick/Albert This model is a fine-tuned version of [Mr-Wick/Albert](https://huggingface.co/Mr-Wick/Albert) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.4248 - Train End Logits Accuracy: 0.3423 - Train Loss Accuracy: 0.0664 - Train Start Logits Accuracy: 0.3437 - Validation Loss: 0.9468 - Validation End Logits Accuracy: 0.4724 - Validation Loss Accuracy: 0.0591 - Validation Start Logits Accuracy: 0.4772 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 16494, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train End Logits Accuracy | Train Loss Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Loss Accuracy | Validation Start Logits Accuracy | Epoch | |:----------:|:-------------------------:|:-------------------:|:---------------------------:|:---------------:|:------------------------------:|:------------------------:|:--------------------------------:|:-----:| | 0.6581 | 0.3488 | 0.0671 | 0.3529 | 0.9366 | 0.4415 | 0.0657 | 0.4486 | 0 | | 0.4248 | 0.3423 | 0.0664 | 0.3437 | 0.9468 | 0.4724 | 0.0591 | 0.4772 | 1 | ### Framework versions - Transformers 4.17.0 - TensorFlow 2.8.0 - Datasets 2.0.0 - Tokenizers 0.11.6
DerekCox/Zlewkuchenny
DerekCox
2022-03-25T07:48:51Z
0
0
null
[ "license:afl-3.0", "region:us" ]
null
2022-03-25T07:44:29Z
---
license: afl-3.0
---

The Nivito sink is the perfect addition to any luxury kitchen. Its sleek, modern design and high-quality construction will make it a favourite spot for washing dishes or preparing meals. In addition, its deep bowl accommodates large pots and pans. So why not add a touch of luxury to your home with a kitchen sink from Nivito? https://www.nivito.pl/
Ebtihal/AraBertMo_base_V9
Ebtihal
2022-03-25T07:25:05Z
18
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
---
language: ar
tags: Fill-Mask
datasets: OSCAR
widget:
- text: " السلام عليكم ورحمة[MASK] وبركاتة"
- text: " اهلا وسهلا بكم في [MASK] من سيربح المليون"
- text: " مرحبا بك عزيزي الزائر [MASK] موقعنا "
---

# Arabic BERT Model: AraBertMo_base_V9

**AraBERTMo** is an Arabic pre-trained language model based on [Google's BERT architecture](https://github.com/google-research/bert). AraBERTMo_base uses the same BERT-Base config. AraBERTMo_base now comes in 10 new variants. All models are available on the `HuggingFace` model page under the [Ebtihal](https://huggingface.co/Ebtihal/) name. Checkpoints are available in PyTorch formats.

## Pretraining Corpus

The `AraBertMo_base_V9` model was pre-trained on ~3 million words: [OSCAR](https://traces1.inria.fr/oscar/) - Arabic version "unshuffled_deduplicated_ar".

## Training results

This model achieves the following results:

| Task | Num examples | Num Epochs | Batch Size | steps | Wall time | training loss |
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|
| Fill-Mask | 30024 | 9 | 64 | 4230 | 7h 57m 42s | 7.3264 |

## Load Pretrained Model

You can use this model by installing `torch` or `tensorflow` and the Hugging Face `transformers` library. You can then use it directly by initializing it like this:

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("Ebtihal/AraBertMo_base_V9")
model = AutoModelForMaskedLM.from_pretrained("Ebtihal/AraBertMo_base_V9")
```

## This model was built for master's degree research in an organization:

- [University of Kufa](https://uokufa.edu.iq/).
- [Faculty of Computer Science and Mathematics](https://mathcomp.uokufa.edu.iq/).
- **Department of Computer Science**
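For a quick check of the model through the pipeline API (using one of the widget examples above):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Ebtihal/AraBertMo_base_V9")
print(fill_mask(" السلام عليكم ورحمة[MASK] وبركاتة"))
```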
docto/Docto-Bot
docto
2022-03-25T04:33:28Z
5
1
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "license:afl-3.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-24T06:17:08Z
--- license: afl-3.0 --- # Docto Bot ## Usage (HuggingFace Transformers) ``` pip install -U transformers ``` ```python import random from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("docto/Docto-Bot") model = AutoModelForCausalLM.from_pretrained("docto/Docto-Bot") special_token = '<|endoftext|>' prompt_text = 'Question: I am having fever\nAnswer:' #prompt_text = f'Question: {userinput}\nAnswer:' encoded_prompt = tokenizer.encode(prompt_text, add_special_tokens = False, return_tensors = 'pt') output_sequences = model.generate( input_ids = encoded_prompt, max_length = 700, temperature = 0.9, top_k = 20, top_p = 0.9, repetition_penalty = 1, do_sample = True, num_return_sequences = 4 ) result = tokenizer.decode(random.choice(output_sequences)) result = result[result.index("Answer: "):result.index(special_token)] print(result[8:]) ``` ## Training Data The Docto-Bot was trained on [Medical Question/Answer dataset](https://github.com/LasseRegin/medical-question-answer-data)
bdotloh/twitter-roberta-base-finetuned-twitter-user-desc
bdotloh
2022-03-25T04:12:19Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-21T07:33:40Z
--- tags: - generated_from_trainer model-index: - name: twitter-roberta-base-finetuned-twitter-user-desc results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # twitter-roberta-base-finetuned-twitter-user-desc This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on a dataset of twitter user descriptions. It achieves the following results on the evaluation set: - eval_perplexity: 2.33 - epoch: 15 - step: 10635 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
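The card above lacks a usage snippet; a minimal sketch via the fill-mask pipeline (note that RoBERTa tokenizers use `<mask>` rather than `[MASK]`, and the example description is invented):

```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="bdotloh/twitter-roberta-base-finetuned-twitter-user-desc",
)
print(fill_mask("dog lover, coffee addict, aspiring <mask>."))
```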
bigmorning/try-m
bigmorning
2022-03-25T04:01:22Z
4
0
transformers
[ "transformers", "tf", "gpt2", "text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-24T16:02:27Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: try-m results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # try-m This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results ### Framework versions - Transformers 4.17.0 - TensorFlow 2.8.0 - Datasets 2.0.0 - Tokenizers 0.11.6
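No usage example is given; a minimal text-generation sketch, assuming the TF weights load through the pipeline (the prompt is invented):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="bigmorning/try-m")
print(generator("Once upon a time,", max_length=50))
```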
espnet/marathi_openslr64_wav2vec2_asrconformer5
espnet
2022-03-25T03:25:20Z
0
0
null
[ "tensorboard", "region:us" ]
null
2022-03-23T21:14:55Z
<!-- Generated by scripts/utils/show_asr_result.sh --> # RESULTS ## Environments - date: `Wed Mar 23 05:58:21 UTC 2022` - python version: `3.9.10 | packaged by conda-forge | (main, Feb 1 2022, 21:24:11) [GCC 9.4.0]` - espnet version: `espnet 0.10.7a1` - pytorch version: `pytorch 1.10.1` - Git hash: `1991a25855821b8b61d775681aa0cdfd6161bbc8` - Commit date: `Mon Mar 21 22:19:19 2022 +0800` ## asr_train_asr_conformer5_raw_mr_bpe150_sp ### WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |inference_asr_model_valid.acc.ave/dev_mr|137|1563|80.2|17.1|2.8|2.0|21.8|71.5| |inference_asr_model_valid.acc.ave/test_mr|200|2536|73.9|20.8|5.4|1.1|27.2|82.0| |inference_lm_config_mr_bpe150_valid.loss.ave_asr_model_valid.acc.ave/dev_mr|137|1563|81.3|15.6|3.1|2.0|20.7|72.3| |inference_lm_config_mr_bpe150_valid.loss.ave_asr_model_valid.acc.ave/test_mr|200|2536|76.6|20.7|2.7|0.9|24.3|80.5| ### CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |inference_asr_model_valid.acc.ave/dev_mr|137|9369|93.7|2.8|3.5|2.3|8.6|71.5| |inference_asr_model_valid.acc.ave/test_mr|200|14174|90.3|3.7|5.9|1.6|11.3|82.0| |inference_lm_config_mr_bpe150_valid.loss.ave_asr_model_valid.acc.ave/dev_mr|137|9369|92.4|3.8|3.8|2.7|10.2|72.3| |inference_lm_config_mr_bpe150_valid.loss.ave_asr_model_valid.acc.ave/test_mr|200|14174|88.3|7.6|4.1|2.7|14.4|80.5| ### TER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |inference_asr_model_valid.acc.ave/dev_mr|137|6050|90.0|5.6|4.5|2.4|12.4|71.5| |inference_asr_model_valid.acc.ave/test_mr|200|9254|85.6|7.6|6.8|1.6|16.0|82.0| |inference_lm_config_mr_bpe150_valid.loss.ave_asr_model_valid.acc.ave/dev_mr|137|6050|88.8|7.0|4.2|2.7|13.9|72.3| |inference_lm_config_mr_bpe150_valid.loss.ave_asr_model_valid.acc.ave/test_mr|200|9254|83.2|12.3|4.5|3.9|20.7|80.5|
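For reference, the WER/CER figures above count substitutions, deletions and insertions against the reference length; a minimal word-level sketch with the editdistance package (toy strings, not recipe outputs):

```python
import editdistance

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Minimum number of word substitutions + deletions + insertions,
    # normalized by the reference length
    return editdistance.eval(ref, hyp) / len(ref)

print(wer("this is a test", "this is test"))  # 0.25
```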
sanchit-gandhi/wav2vec2-2-rnd-no-adapter-regularisation
sanchit-gandhi
2022-03-25T03:10:23Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "speech-encoder-decoder", "automatic-speech-recognition", "generated_from_trainer", "dataset:librispeech_asr", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-22T10:13:48Z
--- tags: - generated_from_trainer datasets: - librispeech_asr model-index: - name: '' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model was trained from scratch on the librispeech_asr dataset. It achieves the following results on the evaluation set: - Loss: 0.7177 - Wer: 0.1283 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 25.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 6.1228 | 1.68 | 1500 | 6.0490 | 1.1433 | | 5.4173 | 3.36 | 3000 | 5.3453 | 1.4878 | | 4.1635 | 5.04 | 4500 | 4.4185 | 0.9644 | | 2.1246 | 6.73 | 6000 | 3.2089 | 0.5026 | | 1.88 | 8.41 | 7500 | 1.9886 | 0.3438 | | 1.2606 | 10.09 | 9000 | 1.4472 | 0.2487 | | 0.7492 | 11.77 | 10500 | 1.1716 | 0.1949 | | 0.8868 | 13.45 | 12000 | 1.0146 | 0.1702 | | 0.5078 | 15.13 | 13500 | 0.8821 | 0.1548 | | 0.4515 | 16.82 | 15000 | 0.8181 | 0.1417 | | 0.3902 | 18.5 | 16500 | 0.7765 | 0.1364 | | 0.3575 | 20.18 | 18000 | 0.7367 | 0.1333 | | 0.2903 | 21.86 | 19500 | 0.7211 | 0.1301 | | 0.2698 | 23.54 | 21000 | 0.7177 | 0.1283 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu113 - Datasets 1.18.3 - Tokenizers 0.11.0
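The card lacks a usage snippet; a minimal sketch, assuming this seq2seq checkpoint works with the ASR pipeline on 16 kHz audio (the file name is hypothetical):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="sanchit-gandhi/wav2vec2-2-rnd-no-adapter-regularisation",
)
print(asr("sample_16khz.flac"))
```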
A8month/RestNet-B
A8month
2022-03-25T01:52:29Z
0
1
null
[ "license:apache-2.0", "region:us" ]
null
2022-03-25T01:52:29Z
--- license: apache-2.0 ---
voidful/wav2vec2-large-xlsr-53-tw-gpt
voidful
2022-03-24T23:08:57Z
37
3
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "audio", "hf-asr-leaderboard", "robust-speech-event", "speech", "xlsr-fine-tuning-week", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: zh-TW datasets: - common_voice tags: - audio - automatic-speech-recognition - hf-asr-leaderboard - robust-speech-event - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: XLSR Wav2Vec2 Taiwanese Mandarin(zh-tw) by Voidful results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice zh-TW type: common_voice args: zh-TW metrics: - name: Test CER type: cer value: 18.36 --- # Wav2Vec2-Large-XLSR-53-tw-gpt Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on zh-tw using the [Common Voice](https://huggingface.co/datasets/common_voice). When using this model, make sure that your speech input is sampled at 16kHz. ## Usage [Colab trial](https://colab.research.google.com/drive/1e_z5jQHYbO2YKEaUgzb1ww1WwiAyydAj?usp=sharing) ``` import torchaudio from datasets import load_dataset, load_metric from transformers import ( Wav2Vec2ForCTC, Wav2Vec2Processor, AutoTokenizer, AutoModelWithLMHead ) import torch import re import sys model_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt" device = "cuda" processor_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt" chars_to_ignore_regex = r"[¥•"#$%&'()*+,-/:;<=>@[\]^_`{|}~⦅⦆「」、 、〃〈〉《》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏﹑﹔·'℃°•·.﹑︰〈〉─《﹖﹣﹂﹁﹔!?。。"#$%&'()*+,﹐-/:;<=>@[\]^_`{|}~⦅⦆「」、、〃》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏..!\"#$%&()*+,\-.\:;<=>?@\[\]\\\/^_`{|}~]" model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device) processor = Wav2Vec2Processor.from_pretrained(processor_name) tokenizer = AutoTokenizer.from_pretrained("ckiplab/gpt2-base-chinese") gpt_model = AutoModelWithLMHead.from_pretrained("ckiplab/gpt2-base-chinese").to(device) resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000) def load_file_to_data(file): batch = {} speech, _ = torchaudio.load(file) batch["speech"] = resampler.forward(speech.squeeze(0)).numpy() batch["sampling_rate"] = resampler.new_freq return batch def predict(data): features = processor(data["speech"], sampling_rate=data["sampling_rate"], padding=True, return_tensors="pt") input_values = features.input_values.to(device) attention_mask = features.attention_mask.to(device) with torch.no_grad(): logits = model(input_values, attention_mask=attention_mask).logits decoded_results = [] for logit in logits: pred_ids = torch.argmax(logit, dim=-1) mask = pred_ids.ge(1).unsqueeze(-1).expand(logit.size()) vocab_size = logit.size()[-1] voice_prob = torch.nn.functional.softmax((torch.masked_select(logit, mask).view(-1,vocab_size)),dim=-1) gpt_input = torch.cat((torch.tensor([tokenizer.cls_token_id]).to(device),pred_ids[pred_ids>0]), 0) gpt_prob = torch.nn.functional.softmax(gpt_model(gpt_input).logits, dim=-1)[:voice_prob.size()[0],:] comb_pred_ids = torch.argmax(gpt_prob*voice_prob, dim=-1) decoded_results.append(processor.decode(comb_pred_ids)) return decoded_results ``` Predict ```python predict(load_file_to_data('voice file path')) ``` ## Evaluation The model can be evaluated as follows on the zh-tw test data of Common Voice. 
CER calculation refer to https://huggingface.co/ctl/wav2vec2-large-xlsr-cantonese env setup: ``` !pip install editdistance !pip install torchaudio !pip install datasets transformers ``` ## Evaluation without LM: ```python import torchaudio from datasets import load_dataset, load_metric from transformers import ( Wav2Vec2ForCTC, Wav2Vec2Processor, ) import torch import re import sys from transformers import AutoTokenizer, AutoModelWithLMHead from datasets import Audio from math import log model_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt" device = "cuda" processor_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt" chars_to_ignore_regex = r"[¥•"#$%&'()*+,-/:;<=>@[\]^_`{|}~⦅⦆「」、 、〃〈〉《》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏﹑﹔·'℃°•·.﹑︰〈〉─《﹖﹣﹂﹁﹔!?。。"#$%&'()*+,﹐-/:;<=>@[\]^_`{|}~⦅⦆「」、、〃》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏..!\"#$%&()*+,\-.\:;<=>?@\[\]\\\/^_`{|}~]" tokenizer = AutoTokenizer.from_pretrained("ckiplab/gpt2-base-chinese") lm_model = AutoModelWithLMHead.from_pretrained("ckiplab/gpt2-base-chinese").to(device) model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device) processor = Wav2Vec2Processor.from_pretrained(processor_name) ds = load_dataset("common_voice", 'zh-TW', split="test") ds = ds.cast_column("audio", Audio(sampling_rate=16_000)) def map_to_array(batch): audio = batch["audio"] batch["speech"] = processor(audio["array"], sampling_rate=audio["sampling_rate"]).input_values[0] batch["sampling_rate"] = audio["sampling_rate"] batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'") return batch ds = ds.map(map_to_array) def map_to_pred(batch): features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt") input_values = features.input_values.to(device) attention_mask = features.attention_mask.to(device) with torch.no_grad(): logits = model(input_values, attention_mask=attention_mask).logits pred_ids = torch.argmax(logits, dim=-1) batch["predicted"] = processor.batch_decode(pred_ids) batch["target"] = batch["sentence"] return batch result = ds.map(map_to_pred, batched=True, batch_size=3, remove_columns=list(ds.features.keys())) def cer_cal(groundtruth, hypothesis): err = 0 tot = 0 for p, t in zip(hypothesis, groundtruth): err += float(ed.eval(p.lower(), t.lower())) tot += len(t) return err / tot print("CER: {:2f}".format(100 * cer_cal(result["target"],result["predicted"]))) ``` `CER: 28.70`. 
`TIME: 04:08 min` ## Evaluation with GPT: ```python import torchaudio from datasets import load_dataset, load_metric from transformers import ( Wav2Vec2ForCTC, Wav2Vec2Processor, ) import torch import re import sys from transformers import AutoTokenizer, AutoModelWithLMHead from datasets import Audio from math import log model_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt" device = "cuda" processor_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt" chars_to_ignore_regex = r"[¥•"#$%&'()*+,-/:;<=>@[\]^_`{|}~⦅⦆「」、 、〃〈〉《》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏﹑﹔·'℃°•·.﹑︰〈〉─《﹖﹣﹂﹁﹔!?。。"#$%&'()*+,﹐-/:;<=>@[\]^_`{|}~⦅⦆「」、、〃》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏..!\"#$%&()*+,\-.\:;<=>?@\[\]\\\/^_`{|}~]" tokenizer = AutoTokenizer.from_pretrained("ckiplab/gpt2-base-chinese") lm_model = AutoModelWithLMHead.from_pretrained("ckiplab/gpt2-base-chinese").to(device) model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device) processor = Wav2Vec2Processor.from_pretrained(processor_name) ds = load_dataset("common_voice", 'zh-TW', split="test") ds = ds.cast_column("audio", Audio(sampling_rate=16_000)) def map_to_array(batch): audio = batch["audio"] batch["speech"] = processor(audio["array"], sampling_rate=audio["sampling_rate"]).input_values[0] batch["sampling_rate"] = audio["sampling_rate"] batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'") return batch ds = ds.map(map_to_array) def map_to_pred(batch): features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt") input_values = features.input_values.to(device) attention_mask = features.attention_mask.to(device) with torch.no_grad(): logits = model(input_values, attention_mask=attention_mask).logits decoded_results = [] for logit in logits: pred_ids = torch.argmax(logit, dim=-1) mask = pred_ids.ge(1).unsqueeze(-1).expand(logit.size()) vocab_size = logit.size()[-1] voice_prob = torch.nn.functional.softmax((torch.masked_select(logit, mask).view(-1,vocab_size)),dim=-1) lm_input = torch.cat((torch.tensor([tokenizer.cls_token_id]).to(device),pred_ids[pred_ids>0]), 0) lm_prob = torch.nn.functional.softmax(lm_model(lm_input).logits, dim=-1)[:voice_prob.size()[0],:] comb_pred_ids = torch.argmax(lm_prob*voice_prob, dim=-1) decoded_results.append(processor.decode(comb_pred_ids)) batch["predicted"] = decoded_results batch["target"] = batch["sentence"] return batch result = ds.map(map_to_pred, batched=True, batch_size=3, remove_columns=list(ds.features.keys())) def cer_cal(groundtruth, hypothesis): err = 0 tot = 0 for p, t in zip(hypothesis, groundtruth): err += float(ed.eval(p.lower(), t.lower())) tot += len(t) return err / tot print("CER: {:2f}".format(100 * cer_cal(result["target"],result["predicted"]))) ``` `CER 25.70`. 
`TIME: 06:04 min` ## Evaluation with GPT + beam search: ```python import torchaudio from datasets import load_dataset, load_metric from transformers import ( Wav2Vec2ForCTC, Wav2Vec2Processor, ) import torch import re import sys from transformers import AutoTokenizer, AutoModelWithLMHead from datasets import Audio from math import log model_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt" device = "cuda" processor_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt" chars_to_ignore_regex = r"[¥•"#$%&'()*+,-/:;<=>@[\]^_`{|}~⦅⦆「」、 、〃〈〉《》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏﹑﹔·'℃°•·.﹑︰〈〉─《﹖﹣﹂﹁﹔!?。。"#$%&'()*+,﹐-/:;<=>@[\]^_`{|}~⦅⦆「」、、〃》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏..!\"#$%&()*+,\-.\:;<=>?@\[\]\\\/^_`{|}~]" tokenizer = AutoTokenizer.from_pretrained("ckiplab/gpt2-base-chinese") lm_model = AutoModelWithLMHead.from_pretrained("ckiplab/gpt2-base-chinese").to(device) model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device) processor = Wav2Vec2Processor.from_pretrained(processor_name) ds = load_dataset("common_voice", 'zh-TW', split="test") ds = ds.cast_column("audio", Audio(sampling_rate=16_000)) def map_to_array(batch): audio = batch["audio"] batch["speech"] = processor(audio["array"], sampling_rate=audio["sampling_rate"]).input_values[0] batch["sampling_rate"] = audio["sampling_rate"] batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'") return batch ds = ds.map(map_to_array) def map_to_pred(batch): features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt") input_values = features.input_values.to(device) attention_mask = features.attention_mask.to(device) with torch.no_grad(): logits = model(input_values, attention_mask=attention_mask).logits decoded_results = [] for logit in logits: sequences = [[[], 1.0]] pred_ids = torch.argmax(logit, dim=-1) mask = pred_ids.ge(1).unsqueeze(-1).expand(logit.size()) vocab_size = logit.size()[-1] voice_prob = torch.nn.functional.softmax((torch.masked_select(logit, mask).view(-1,vocab_size)),dim=-1) while True: all_candidates = list() exceed = False for seq in sequences: tokens, score = seq gpt_input = torch.tensor([tokenizer.cls_token_id]+tokens).to(device) gpt_prob = torch.nn.functional.softmax(lm_model(gpt_input).logits, dim=-1)[:len(gpt_input),:] if len(gpt_input) >= len(voice_prob): exceed = True comb_pred_ids = gpt_prob*voice_prob[:len(gpt_input)] v,i = torch.topk(comb_pred_ids,50,dim=-1) for tok_id,tok_prob in zip(i.tolist()[-1],v.tolist()[-1]): candidate = [tokens + [tok_id], score + -log(tok_prob)] all_candidates.append(candidate) ordered = sorted(all_candidates, key=lambda tup: tup[1]) sequences = ordered[:10] if exceed: break decoded_results.append(processor.decode(sequences[0][0])) batch["predicted"] = decoded_results batch["target"] = batch["sentence"] return batch result = ds.map(map_to_pred, batched=True, batch_size=3, remove_columns=list(ds.features.keys())) def cer_cal(groundtruth, hypothesis): err = 0 tot = 0 for p, t in zip(hypothesis, groundtruth): err += float(ed.eval(p.lower(), t.lower())) tot += len(t) return err / tot print("CER: {:2f}".format(100 * cer_cal(result["target"],result["predicted"]))) ``` `CER 18.36`. 
## Evaluation with BERT: ```python import torchaudio from datasets import load_dataset, load_metric from transformers import ( Wav2Vec2ForCTC, Wav2Vec2Processor, ) import torch import re import sys from transformers import AutoTokenizer, AutoModelForMaskedLM model_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt" device = "cuda" processor_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt" chars_to_ignore_regex = r"[¥•"#$%&'()*+,-/:;<=>@[\]^_`{|}~⦅⦆「」、 、〃〈〉《》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏﹑﹔·'℃°•·.﹑︰〈〉─《﹖﹣﹂﹁﹔!?。。"#$%&'()*+,﹐-/:;<=>@[\]^_`{|}~⦅⦆「」、、〃》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏..!\"#$%&()*+,\-.\:;<=>?@\[\]\\\/^_`{|}~]" tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese") lm_model = AutoModelForMaskedLM.from_pretrained("bert-base-chinese").to(device) model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device) processor = Wav2Vec2Processor.from_pretrained(processor_name) ds = load_dataset("common_voice", 'zh-TW', data_dir="./cv-corpus-6.1-2020-12-11", split="test") resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000) def map_to_array(batch): speech, _ = torchaudio.load(batch["path"]) batch["speech"] = resampler.forward(speech.squeeze(0)).numpy() batch["sampling_rate"] = resampler.new_freq batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'") return batch ds = ds.map(map_to_array) def map_to_pred(batch): features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt") input_values = features.input_values.to(device) attention_mask = features.attention_mask.to(device) with torch.no_grad(): logits = model(input_values, attention_mask=attention_mask).logits decoded_results = [] for logit in logits: pred_ids = torch.argmax(logit, dim=-1) mask = ~pred_ids.eq(tokenizer.pad_token_id).unsqueeze(-1).expand(logit.size()) vocab_size = logit.size()[-1] voice_prob = torch.nn.functional.softmax((torch.masked_select(logit, mask).view(-1,vocab_size)),dim=-1) lm_input = torch.masked_select(pred_ids, ~pred_ids.eq(tokenizer.pad_token_id)).unsqueeze(0) mask_lm_prob = voice_prob.clone() for i in range(lm_input.shape[-1]): masked_lm_input = lm_input.clone() masked_lm_input[0][i] = torch.tensor(tokenizer.mask_token_id).to('cuda') lm_prob = torch.nn.functional.softmax(lm_model(masked_lm_input).logits, dim=-1).squeeze(0) mask_lm_prob[i] = lm_prob[i] comb_pred_ids = torch.argmax(mask_lm_prob*voice_prob, dim=-1) decoded_results.append(processor.decode(comb_pred_ids)) batch["predicted"] = decoded_results batch["target"] = batch["sentence"] return batch result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys())) def cer_cal(groundtruth, hypothesis): err = 0 tot = 0 for p, t in zip(hypothesis, groundtruth): err += float(ed.eval(p.lower(), t.lower())) tot += len(t) return err / tot print("CER: {:2f}".format(100 * cer_cal(result["target"],result["predicted"]))) ``` `CER 25.57`. 
`TIME: 09:49 min` ## Evaluation with T-TA: setup ``` !git clone https://github.com/voidful/pytorch-tta.git !mv ./pytorch-tta/tta ./tta !wget https://github.com/voidful/pytorch-tta/releases/download/wiki_zh/wiki_zh.pt ``` ```python import torchaudio from datasets import load_dataset, load_metric from transformers import ( Wav2Vec2ForCTC, Wav2Vec2Processor, ) import torch import re import sys from tta.modeling_tta import TTALMModel from transformers import AutoTokenizer import torch model_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt" device = "cuda" processor_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt" chars_to_ignore_regex = r"[¥•"#$%&'()*+,-/:;<=>@[\]^_`{|}~⦅⦆「」、 、〃〈〉《》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏﹑﹔·'℃°•·.﹑︰〈〉─《﹖﹣﹂﹁﹔!?。。"#$%&'()*+,﹐-/:;<=>@[\]^_`{|}~⦅⦆「」、、〃》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏..!\"#$%&()*+,\-.\:;<=>?@\[\]\\\/^_`{|}~]" tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese") lm_model = TTALMModel("bert-base-chinese") tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese") lm_model.load_state_dict(torch.load("./wiki_zh.pt",map_location=torch.device('cuda'))) lm_model.to('cuda') lm_model.eval() model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device) processor = Wav2Vec2Processor.from_pretrained(processor_name) ds = load_dataset("common_voice", 'zh-TW', data_dir="./cv-corpus-6.1-2020-12-11", split="test") resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000) def map_to_array(batch): speech, _ = torchaudio.load(batch["path"]) batch["speech"] = resampler.forward(speech.squeeze(0)).numpy() batch["sampling_rate"] = resampler.new_freq batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'") return batch ds = ds.map(map_to_array) def map_to_pred(batch): features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt") input_values = features.input_values.to(device) attention_mask = features.attention_mask.to(device) with torch.no_grad(): logits = model(input_values, attention_mask=attention_mask).logits decoded_results = [] for logit in logits: pred_ids = torch.argmax(logit, dim=-1) mask = ~pred_ids.eq(tokenizer.pad_token_id).unsqueeze(-1).expand(logit.size()) vocab_size = logit.size()[-1] voice_prob = torch.nn.functional.softmax((torch.masked_select(logit, mask).view(-1,vocab_size)),dim=-1) lm_input = torch.masked_select(pred_ids, ~pred_ids.eq(tokenizer.pad_token_id)).unsqueeze(0) lm_prob = torch.nn.functional.softmax(lm_model.forward(lm_input)[0], dim=-1).squeeze(0) comb_pred_ids = torch.argmax(lm_prob*voice_prob, dim=-1) decoded_results.append(processor.decode(comb_pred_ids)) batch["predicted"] = decoded_results batch["target"] = batch["sentence"] return batch result = ds.map(map_to_pred, batched=True, batch_size=16, remove_columns=list(ds.features.keys())) def cer_cal(groundtruth, hypothesis): err = 0 tot = 0 for p, t in zip(hypothesis, groundtruth): err += float(ed.eval(p.lower(), t.lower())) tot += len(t) return err / tot print("CER: {:2f}".format(100 * cer_cal(result["target"],result["predicted"]))) ``` `CER: 25.77`. `TIME: 06:01 min`
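Note that every evaluation snippet above calls `ed.eval(...)` without binding `ed`; they appear to assume the editdistance package (installed in the env setup) is imported under that alias:

```python
import editdistance as ed
```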
Paul-Vinh/bert-base-multilingual-cased-finetuned-squad
Paul-Vinh
2022-03-24T22:47:39Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-24T19:22:33Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: bert-base-multilingual-cased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-multilingual-cased-finetuned-squad This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.0122 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.9982 | 1.0 | 5555 | 0.9436 | | 0.7694 | 2.0 | 11110 | 0.9356 | | 0.5627 | 3.0 | 16665 | 1.0122 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
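The card omits a usage example; a minimal question-answering sketch (question and context are invented):

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Paul-Vinh/bert-base-multilingual-cased-finetuned-squad",
)
print(qa(question="Where does Wolfgang live?",
         context="My name is Wolfgang and I live in Berlin."))
```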
huggingtweets/iopred
huggingtweets
2022-03-24T22:38:36Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-24T08:39:05Z
--- language: en thumbnail: http://www.huggingtweets.com/iopred/1648161500488/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/804464329202409472/_-74eUkS_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">diet dr. kit</div> <div style="text-align: center; font-size: 14px;">@iopred</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from diet dr. kit. | Data | diet dr. kit | | --- | --- | | Tweets downloaded | 3240 | | Retweets | 177 | | Short tweets | 258 | | Tweets kept | 2805 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/52vmud4n/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @iopred's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2i464eff) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2i464eff/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/iopred') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
JavierIA/es-en
JavierIA
2022-03-24T21:40:13Z
6
0
transformers
[ "transformers", "pytorch", "jax", "marian", "text2text-generation", "translation", "en", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-24T21:36:02Z
--- language: - en - es tags: - translation license: apache-2.0 --- ### eng-spa * source group: English * target group: Spanish * OPUS readme: [eng-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-spa/README.md) * model: transformer * source language(s): eng * target language(s): spa * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-08-18.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-spa/opus-2020-08-18.zip) * test set translations: [opus-2020-08-18.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-spa/opus-2020-08-18.test.txt) * test set scores: [opus-2020-08-18.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-spa/opus-2020-08-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newssyscomb2009-engspa.eng.spa | 31.0 | 0.583 | | news-test2008-engspa.eng.spa | 29.7 | 0.564 | | newstest2009-engspa.eng.spa | 30.2 | 0.578 | | newstest2010-engspa.eng.spa | 36.9 | 0.620 | | newstest2011-engspa.eng.spa | 38.2 | 0.619 | | newstest2012-engspa.eng.spa | 39.0 | 0.625 | | newstest2013-engspa.eng.spa | 35.0 | 0.598 | | Tatoeba-test.eng.spa | 54.9 | 0.721 | ### System Info: - hf_name: eng-spa - source_languages: eng - target_languages: spa - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-spa/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'es'] - src_constituents: {'eng'} - tgt_constituents: {'spa'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-spa/opus-2020-08-18.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-spa/opus-2020-08-18.test.txt - src_alpha3: eng - tgt_alpha3: spa - short_pair: en-es - chrF2_score: 0.721 - bleu: 54.9 - brevity_penalty: 0.978 - ref_len: 77311.0 - src_name: English - tgt_name: Spanish - train_date: 2020-08-18 00:00:00 - src_alpha2: en - tgt_alpha2: es - prefer_old: False - long_pair: eng-spa - helsinki_git_sha: d2f0910c89026c34a44e331e785dec1e0faa7b82 - transformers_git_sha: f7af09b4524b784d67ae8526f0e2fcc6f5ed0de9 - port_machine: brutasse - port_time: 2020-08-24-18:20
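No usage snippet is included; despite the `es-en` repo name, the card describes an English→Spanish model, so a minimal sketch translates English input (the sentence is invented):

```python
from transformers import pipeline

translator = pipeline("translation", model="JavierIA/es-en")
print(translator("The weather is nice today."))
```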
SergioCabrera/DemoCaballos
SergioCabrera
2022-03-24T20:20:55Z
0
0
null
[ "region:us" ]
null
2022-03-24T19:34:40Z
```python
# Install dependencies (Colab/notebook)
!pip install -Uqq fastbook
!pip install fastai==2.5

import fastbook
fastbook.setup_book()

from fastbook import *
from fastai.vision.widgets import *

# Horse images, organized in one folder per class
path = Path('/content/gdrive/My Drive/caballos')

modelo = DataBlock(
    blocks=(ImageBlock, CategoryBlock),
    get_items=get_image_files,
    splitter=RandomSplitter(valid_pct=0.2, seed=42),
    get_y=parent_label,
    item_tfms=RandomResizedCrop(224, min_scale=0.5),
    batch_tfms=aug_transforms())

dls = modelo.dataloaders(path)
learn = cnn_learner(dls, resnet18, metrics=error_rate)
learn.fine_tune(4)
```
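Once fine-tuned, the learner can classify a new image; a short sketch with a hypothetical file name:

```python
# Predict the class of a single image (hypothetical path)
pred, pred_idx, probs = learn.predict(PILImage.create('test_horse.jpg'))
print(pred, probs[pred_idx])
```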
Monkeyking/DialoGPT-Darky
Monkeyking
2022-03-24T18:40:36Z
0
0
null
[ "license:artistic-2.0", "region:us" ]
null
2022-03-24T18:40:36Z
--- license: artistic-2.0 ---
optimum/all-MiniLM-L6-v2
optimum
2022-03-24T16:16:57Z
74,671
17
sentence-transformers
[ "sentence-transformers", "onnx", "feature-extraction", "sentence-similarity", "en", "arxiv:1904.06472", "arxiv:2102.07033", "arxiv:2104.08727", "arxiv:1704.05179", "arxiv:1810.09305", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-24T16:15:58Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity language: en license: apache-2.0 --- # ONNX convert all-MiniLM-L6-v2 ## Conversion of [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch import torch.nn.functional as F #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L6-v2') model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L6-v2') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) # Normalize embeddings sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-MiniLM-L6-v2) ------ ## Background The project aims to train sentence embedding models on very large sentence level datasets using a self-supervised contrastive learning objective. We used the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model and fine-tuned in on a 1B sentence pairs dataset. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences, was actually paired with it in our dataset. We developped this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face. 
We developed this model as part of the project: [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8s, as well as input from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks.

## Intended uses

Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks. By default, input text longer than 256 word pieces is truncated.

## Training procedure

### Pre-training

We use the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure.

### Fine-tuning

We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity of each possible sentence pair in the batch. We then apply the cross-entropy loss by comparing with the true pairs.

#### Hyper parameters

We trained our model on a TPU v3-8. We trained the model for 100k steps using a batch size of 1024 (128 per TPU core). We used a learning rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`.

#### Training data

We use the concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion. We sampled each dataset with a weighted probability; the configuration is detailed in the `data_config.json` file.
| Dataset | Paper | Number of training tuples | |--------------------------------------------------------|:----------------------------------------:|:--------------------------:| | [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 | | [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 | | [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 | | [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 | | [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 | | [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 | | [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 | | [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395| | [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 | | [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 | | [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 | | [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 | | AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 | | [Stack 
Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 | | [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 | | [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 | | [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 | | [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 | | [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 | | [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 | | [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 | | [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 | | **Total** | | **1,170,060,424** |
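Since this repository hosts the ONNX export, a sketch loading it through optimum's ONNX Runtime classes rather than plain transformers (assuming a recent optimum install):

```python
from optimum.onnxruntime import ORTModelForFeatureExtraction
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("optimum/all-MiniLM-L6-v2")
model = ORTModelForFeatureExtraction.from_pretrained("optimum/all-MiniLM-L6-v2")

inputs = tokenizer(["This is an example sentence"], padding=True, return_tensors="pt")
outputs = model(**inputs)

# Mean-pool the token embeddings into one sentence vector,
# mirroring the mean_pooling helper in the PyTorch example above
mask = inputs["attention_mask"].unsqueeze(-1).float()
embedding = (outputs.last_hidden_state * mask).sum(1) / mask.sum(1)
print(embedding.shape)  # torch.Size([1, 384])
```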
optimum/bert-base-NER
optimum
2022-03-24T16:14:52Z
26
2
transformers
[ "transformers", "onnx", "token-classification", "en", "dataset:conll2003", "arxiv:1810.04805", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-24T16:13:41Z
--- language: en datasets: - conll2003 license: mit --- # ONNX convert of bert-base-NER ## Conversion of [bert-base-NER](https://huggingface.co/dslim/bert-base-NER) ## Model description **bert-base-NER** is a fine-tuned BERT model that is ready to use for **Named Entity Recognition** and achieves **state-of-the-art performance** for the NER task. It has been trained to recognize four types of entities: location (LOC), organizations (ORG), person (PER) and Miscellaneous (MISC). Specifically, this model is a *bert-base-cased* model that was fine-tuned on the English version of the standard [CoNLL-2003 Named Entity Recognition](https://www.aclweb.org/anthology/W03-0419.pdf) dataset. If you'd like to use a larger BERT-large model fine-tuned on the same dataset, a [**bert-large-NER**](https://huggingface.co/dslim/bert-large-NER/) version is also available. ## Intended uses & limitations #### How to use You can use this model with Transformers *pipeline* for NER. ```python from transformers import AutoTokenizer, AutoModelForTokenClassification from transformers import pipeline tokenizer = AutoTokenizer.from_pretrained("dslim/bert-base-NER") model = AutoModelForTokenClassification.from_pretrained("dslim/bert-base-NER") nlp = pipeline("ner", model=model, tokenizer=tokenizer) example = "My name is Wolfgang and I live in Berlin" ner_results = nlp(example) print(ner_results) ``` #### Limitations and bias This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. Furthermore, the model occassionally tags subword tokens as entities and post-processing of results may be necessary to handle those cases. ## Training data This model was fine-tuned on English version of the standard [CoNLL-2003 Named Entity Recognition](https://www.aclweb.org/anthology/W03-0419.pdf) dataset. The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes: Abbreviation|Description -|- O|Outside of a named entity B-MIS |Beginning of a miscellaneous entity right after another miscellaneous entity I-MIS | Miscellaneous entity B-PER |Beginning of a person’s name right after another person’s name I-PER |Person’s name B-ORG |Beginning of an organization right after another organization I-ORG |organization B-LOC |Beginning of a location right after another location I-LOC |Location ### CoNLL-2003 English Dataset Statistics This dataset was derived from the Reuters corpus which consists of Reuters news stories. You can read more about how this dataset was created in the CoNLL-2003 paper. #### # of training examples per entity type Dataset|LOC|MISC|ORG|PER -|-|-|-|- Train|7140|3438|6321|6600 Dev|1837|922|1341|1842 Test|1668|702|1661|1617 #### # of articles/sentences/tokens per dataset Dataset |Articles |Sentences |Tokens -|-|-|- Train |946 |14,987 |203,621 Dev |216 |3,466 |51,362 Test |231 |3,684 |46,435 ## Training procedure This model was trained on a single NVIDIA V100 GPU with recommended hyperparameters from the [original BERT paper](https://arxiv.org/pdf/1810.04805) which trained & evaluated the model on CoNLL-2003 NER task. 
## Eval results metric|dev|test -|-|- f1 |95.1 |91.3 precision |95.0 |90.7 recall |95.3 |91.9 The test metrics are a little lower than the official Google BERT results which encoded document context & experimented with CRF. More on replicating the original results [here](https://github.com/google-research/bert/issues/223). ### BibTeX entry and citation info ``` @article{DBLP:journals/corr/abs-1810-04805, author = {Jacob Devlin and Ming{-}Wei Chang and Kenton Lee and Kristina Toutanova}, title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language Understanding}, journal = {CoRR}, volume = {abs/1810.04805}, year = {2018}, url = {http://arxiv.org/abs/1810.04805}, archivePrefix = {arXiv}, eprint = {1810.04805}, timestamp = {Tue, 30 Oct 2018 20:39:56 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` ``` @inproceedings{tjong-kim-sang-de-meulder-2003-introduction, title = "Introduction to the {C}o{NLL}-2003 Shared Task: Language-Independent Named Entity Recognition", author = "Tjong Kim Sang, Erik F. and De Meulder, Fien", booktitle = "Proceedings of the Seventh Conference on Natural Language Learning at {HLT}-{NAACL} 2003", year = "2003", url = "https://www.aclweb.org/anthology/W03-0419", pages = "142--147", } ```
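Mirroring the pipeline snippet above but actually exercising the ONNX export, a sketch with optimum's ONNX Runtime class (assuming a recent optimum install):

```python
from optimum.onnxruntime import ORTModelForTokenClassification
from transformers import AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained("optimum/bert-base-NER")
model = ORTModelForTokenClassification.from_pretrained("optimum/bert-base-NER")

nlp = pipeline("ner", model=model, tokenizer=tokenizer)
print(nlp("My name is Wolfgang and I live in Berlin"))
```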
espnet/Karthik_DSTC2_asr_train_asr_wav2vec_conformer_2
espnet
2022-03-24T12:42:03Z
1
0
espnet
[ "espnet", "tensorboard", "audio", "automatic-speech-recognition", "en", "dataset:DSTC2", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
automatic-speech-recognition
2022-03-24T12:03:09Z
--- tags: - espnet - audio - automatic-speech-recognition language: en datasets: - DSTC2 license: cc-by-4.0 --- ## ESPnet2 ASR pretrained model ### `espnet/Karthik_DSTC2_asr_train_asr_wav2vec_conformer_2` This model was trained by Karthik using DSTC2/asr1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
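Until the "coming soon" demo is filled in, a hedged sketch based on the generic ESPnet2 inference API (whether this exact entry point applies to this recipe is an assumption):

```python
import soundfile as sf
from espnet2.bin.asr_inference import Speech2Text

# Requires the espnet and espnet_model_zoo packages
speech2text = Speech2Text.from_pretrained(
    "espnet/Karthik_DSTC2_asr_train_asr_wav2vec_conformer_2"
)

speech, rate = sf.read("utterance.wav")  # hypothetical 16 kHz recording
text, *_ = speech2text(speech)[0]
print(text)
```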
wenmengzhou/test_image
wenmengzhou
2022-03-24T12:21:59Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2022-03-24T12:21:59Z
--- license: apache-2.0 ---
DrishtiSharma/wav2vec2-large-xls-r-300m-as-with-LM-v2
DrishtiSharma
2022-03-24T11:58:51Z
0
1
null
[ "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "as", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "dataset:common_voice", "license:apache-2.0", "model-index", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
--- language: - as license: apache-2.0 tags: - automatic-speech-recognition - mozilla-foundation/common_voice_8_0 - generated_from_trainer - as - robust-speech-event - model_for_talk - hf-asr-leaderboard datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-300m-as-with-LM-v2 results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 8 type: mozilla-foundation/common_voice_8_0 args: hsb metrics: - name: Test WER type: wer value: [] - name: Test CER type: cer value: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> ### Note: Files are missing. Probably, didn't get (git)pushed properly. :( This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 1.1679 - Wer: 0.5761 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.000111 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 300 - num_epochs: 200 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:------:| | 8.3852 | 10.51 | 200 | 3.6402 | 1.0 | | 3.5374 | 21.05 | 400 | 3.3894 | 1.0 | | 2.8645 | 31.56 | 600 | 1.3143 | 0.8303 | | 1.1784 | 42.1 | 800 | 0.9417 | 0.6661 | | 0.7805 | 52.62 | 1000 | 0.9292 | 0.6237 | | 0.5973 | 63.15 | 1200 | 0.9489 | 0.6014 | | 0.4784 | 73.67 | 1400 | 0.9916 | 0.5962 | | 0.4138 | 84.21 | 1600 | 1.0272 | 0.6121 | | 0.3491 | 94.72 | 1800 | 1.0412 | 0.5984 | | 0.3062 | 105.26 | 2000 | 1.0769 | 0.6005 | | 0.2707 | 115.77 | 2200 | 1.0708 | 0.5752 | | 0.2459 | 126.31 | 2400 | 1.1285 | 0.6009 | | 0.2234 | 136.82 | 2600 | 1.1209 | 0.5949 | | 0.2035 | 147.36 | 2800 | 1.1348 | 0.5842 | | 0.1876 | 157.87 | 3000 | 1.1480 | 0.5872 | | 0.1669 | 168.41 | 3200 | 1.1496 | 0.5838 | | 0.1595 | 178.92 | 3400 | 1.1721 | 0.5778 | | 0.1505 | 189.46 | 3600 | 1.1654 | 0.5744 | | 0.1486 | 199.97 | 3800 | 1.1679 | 0.5761 | ### Framework versions - Transformers 4.16.1 - Pytorch 1.10.0+cu111 - Datasets 1.18.2 - Tokenizers 0.11.0
sammy786/wav2vec2-xlsr-romansh_sursilvan
sammy786
2022-03-24T11:58:43Z
4
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "rm-sursilv", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: - rm-sursilv license: apache-2.0 tags: - automatic-speech-recognition - mozilla-foundation/common_voice_8_0 - generated_from_trainer - rm-sursilv - robust-speech-event - model_for_talk - hf-asr-leaderboard datasets: - mozilla-foundation/common_voice_8_0 model-index: - name: sammy786/wav2vec2-xlsr-romansh_sursilvan results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 8 type: mozilla-foundation/common_voice_8_0 args: rm-sursilv metrics: - name: Test WER type: wer value: 13.82 - name: Test CER type: cer value: 3.02 --- # sammy786/wav2vec2-xlsr-romansh_sursilvan This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - rm-sursilv dataset. It achieves the following results on evaluation set (which is 10 percent of train data set merged with other and dev datasets): - Loss: 16.38 - Wer: 21.25 ## Model description "facebook/wav2vec2-xls-r-1b" was finetuned. ## Intended uses & limitations More information needed ## Training and evaluation data Training data - Common voice Finnish train.tsv, dev.tsv and other.tsv ## Training procedure For creating the train dataset, all possible datasets were appended and 90-10 split was used. ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.000045637994662983496 - train_batch_size: 16 - eval_batch_size: 16 - seed: 13 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 500 - num_epochs: 40 - mixed_precision_training: Native AMP ### Training results | Step | Training Loss | Validation Loss | Wer | |------|---------------|-----------------|----------| | 200 | 4.825500 | 2.932350 | 1.000000 | | 400 | 1.325600 | 0.292645 | 0.415436 | | 600 | 0.709800 | 0.219167 | 0.324451 | | 800 | 0.576800 | 0.174390 | 0.275477 | | 1000 | 0.538100 | 0.183737 | 0.272116 | | 1200 | 0.475200 | 0.159078 | 0.253871 | | 1400 | 0.420400 | 0.167277 | 0.240907 | | 1600 | 0.393500 | 0.167216 | 0.247269 | | 1800 | 0.407500 | 0.178282 | 0.239827 | | 2000 | 0.374400 | 0.184590 | 0.239467 | | 2200 | 0.382600 | 0.164106 | 0.227824 | | 2400 | 0.363100 | 0.162543 | 0.228544 | | 2600 | 0.199000 | 0.172903 | 0.231665 | | 2800 | 0.150800 | 0.160117 | 0.222662 | | 3000 | 0.101100 | 0.169553 | 0.222662 | | 3200 | 0.104200 | 0.161056 | 0.220622 | | 3400 | 0.096900 | 0.161562 | 0.216781 | | 3600 | 0.092200 | 0.163880 | 0.212580 | | 3800 | 0.089200 | 0.162288 | 0.214140 | | 4000 | 0.076200 | 0.160470 | 0.213540 | | 4200 | 0.087900 | 0.162827 | 0.213060 | | 4400 | 0.066200 | 0.161096 | 0.213300 | | 4600 | 0.076000 | 0.162060 | 0.213660 | | 4800 | 0.071400 | 0.162045 | 0.213300 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.0+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.10.3 #### Evaluation Commands 1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test` ```bash python eval.py --model_id sammy786/wav2vec2-xlsr-romansh_sursilvan --dataset mozilla-foundation/common_voice_8_0 --config rm-sursilv --split test ```
sammy786/wav2vec2-xlsr-dhivehi
sammy786
2022-03-24T11:58:38Z
4
1
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "dv", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: - dv license: apache-2.0 tags: - automatic-speech-recognition - mozilla-foundation/common_voice_8_0 - generated_from_trainer - dv - robust-speech-event - model_for_talk - hf-asr-leaderboard datasets: - mozilla-foundation/common_voice_8_0 model-index: - name: sammy786/wav2vec2-xlsr-dhivehi results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 8 type: mozilla-foundation/common_voice_8_0 args: dv metrics: - name: Test WER type: wer value: 26.91 - name: Test CER type: cer value: 4.02 --- # sammy786/wav2vec2-xlsr-dhivehi This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - dv dataset. It achieves the following results on the evaluation set (10 percent of the train set merged with the other and dev sets): - Loss: 14.86 - Wer: 29.32 ## Model description "facebook/wav2vec2-xls-r-1b" was fine-tuned. ## Intended uses & limitations More information needed ## Training and evaluation data Training data: Common Voice Dhivehi train.tsv, dev.tsv and other.tsv ## Training procedure To create the train dataset, all available splits were appended and a 90-10 split was used. ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.000045637994662983496 - train_batch_size: 8 - eval_batch_size: 16 - seed: 13 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Step | Training Loss | Validation Loss | Wer | |-------|---------------|-----------------|----------| | 200 | 4.883800 | 3.190218 | 1.000000 | | 400 | 1.600100 | 0.497887 | 0.726159 | | 600 | 0.928500 | 0.358781 | 0.603892 | | 800 | 0.867900 | 0.309132 | 0.570786 | | 1000 | 0.743100 | 0.309116 | 0.552954 | | 1200 | 0.725100 | 0.266839 | 0.538378 | | 1400 | 0.786200 | 0.259797 | 0.535897 | | 1600 | 0.655700 | 0.245691 | 0.517290 | | 1800 | 0.650500 | 0.246957 | 0.516204 | | 2000 | 0.685500 | 0.234808 | 0.516204 | | 2200 | 0.487100 | 0.228409 | 0.507753 | | 2400 | 0.401300 | 0.221087 | 0.495968 | | 2600 | 0.359300 | 0.212476 | 0.489301 | | 2800 | 0.347300 | 0.204848 | 0.487750 | | 3000 | 0.327000 | 0.203163 | 0.478756 | | 3200 | 0.337100 | 0.210235 | 0.487595 | | 3400 | 0.308900 | 0.201471 | 0.491316 | | 3600 | 0.292600 | 0.192437 | 0.476120 | | 3800 | 0.289600 | 0.198398 | 0.468445 | | 4000 | 0.290200 | 0.193484 | 0.467204 | | 4200 | 0.272600 | 0.193999 | 0.470150 | | 4400 | 0.266700 | 0.187384 | 0.460769 | | 4600 | 0.253800 | 0.187279 | 0.476663 | | 4800 | 0.266400 | 0.197395 | 0.466817 | | 5000 | 0.258000 | 0.188920 | 0.456660 | | 5200 | 0.237200 | 0.180770 | 0.457358 | | 5400 | 0.237900 | 0.178149 | 0.448287 | | 5600 | 0.232600 | 0.179827 | 0.461002 | | 5800 | 0.228500 | 0.182142 | 0.445185 | | 6000 | 0.221000 | 0.173619 | 0.440688 | | 6200 | 0.219500 | 0.172291 | 0.442859 | | 6400 | 0.219400 | 0.173339 | 0.430609 | | 6600 | 0.201900 | 0.177552 | 0.426423 | | 6800 | 0.199000 | 0.173157 | 0.429834 | | 7000 | 0.200000 | 0.166503 | 0.423709 | | 7200 | 0.194600 | 0.171812 | 0.429834 | | 7400 | 0.192100 | 0.164989 | 0.420530 | | 7600 | 0.185000 | 0.168355 | 0.418825 | | 7800 | 0.175100 | 0.168128 | 0.419290 | | 8000 | 0.173500 | 0.167959 | 0.424950 | | 8200 | 0.172200 | 0.173643 | 0.414793 | | 8400 | 0.164200 | 0.167020 | 0.406342 | | 8600 | 0.170800 | 0.168050 | 0.405334 | | 8800 | 0.157900 | 0.164290 | 0.396573 | | 9000 | 0.159900 | 0.163188 | 0.397426 | | 9200 | 0.151700 | 0.164370 | 0.390991 | | 9400 | 0.146600 | 0.165053 | 0.392852 | | 9600 | 0.142200 | 0.164939 | 0.391844 | | 9800 | 0.148300 | 0.164422 | 0.385719 | | 10000 | 0.136200 | 0.166569 | 0.385951 | | 10200 | 0.140700 | 0.161377 | 0.379594 | | 10400 | 0.133300 | 0.165194 | 0.378276 | | 10600 | 0.131300 | 0.164328 | 0.369205 | | 10800 | 0.135500 | 0.160254 | 0.373236 | | 11000 | 0.121100 | 0.163522 | 0.372693 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.0+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.10.3 #### Evaluation Commands 1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test` ```bash python eval.py --model_id sammy786/wav2vec2-xlsr-dhivehi --dataset mozilla-foundation/common_voice_8_0 --config dv --split test ```
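For quick experiments, the transformers ASR pipeline wraps feature extraction and CTC decoding in one call. This is a sketch rather than the author's code; `audio.mp3` is a placeholder path, and ffmpeg is assumed to be available for decoding.

```python
from transformers import pipeline

# The pipeline resamples the input to the 16 kHz the model expects
asr = pipeline("automatic-speech-recognition", model="sammy786/wav2vec2-xlsr-dhivehi")
print(asr("audio.mp3")["text"])  # "audio.mp3" is a placeholder for your own recording
```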
reichenbach/wav2vec2-large-xls-r-300m-as
reichenbach
2022-03-24T11:58:33Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "robust-speech-event", "hf-asr-leaderboard", "as", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 language: - as tags: - generated_from_trainer - robust-speech-event - hf-asr-leaderboard datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-300m-as results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-as This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.8318 - Wer: 0.5174 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.12 - num_epochs: 120 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 4.882 | 25.0 | 400 | 1.2290 | 0.8182 | | 0.8275 | 50.0 | 800 | 0.6835 | 0.5398 | | 0.337 | 75.0 | 1200 | 0.7789 | 0.5107 | | 0.2113 | 100.0 | 1600 | 0.8318 | 0.5174 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.10.3 ### Test Evaluation Common Voice Assamese Test Set (v7.0) - WER: 0.7224 - CER: 0.2882
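Since the card has no usage example, a hedged sketch with the transformers ASR pipeline follows; the file path is a placeholder and ffmpeg is assumed for audio decoding.

```python
from transformers import pipeline

# Greedy CTC decoding of an Assamese recording; input is resampled automatically
asr = pipeline("automatic-speech-recognition", model="reichenbach/wav2vec2-large-xls-r-300m-as")
print(asr("sample.wav")["text"])  # "sample.wav" is a hypothetical local file
```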
infinitejoy/wav2vec2-large-xls-r-300m-sakha
infinitejoy
2022-03-24T11:58:14Z
6
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "sah", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: - sah license: apache-2.0 tags: - automatic-speech-recognition - mozilla-foundation/common_voice_7_0 - generated_from_trainer - sah - robust-speech-event - model_for_talk - hf-asr-leaderboard datasets: - mozilla-foundation/common_voice_7_0 model-index: - name: XLS-R-300M - Sakha results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 7 type: mozilla-foundation/common_voice_7_0 args: sah metrics: - name: Test WER type: wer value: 44.196 - name: Test CER type: cer value: 10.271 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-sakha This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - SAH dataset. It achieves the following results on the evaluation set: - Loss: 0.4995 - Wer: 0.4421 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 32 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.8597 | 8.47 | 500 | 0.7731 | 0.7211 | | 1.2508 | 16.95 | 1000 | 0.5368 | 0.5989 | | 1.1066 | 25.42 | 1500 | 0.5034 | 0.5533 | | 1.0064 | 33.9 | 2000 | 0.4686 | 0.5114 | | 0.9324 | 42.37 | 2500 | 0.4927 | 0.5056 | | 0.876 | 50.85 | 3000 | 0.4734 | 0.4795 | | 0.8082 | 59.32 | 3500 | 0.4748 | 0.4799 | | 0.7604 | 67.8 | 4000 | 0.4949 | 0.4691 | | 0.7241 | 76.27 | 4500 | 0.5090 | 0.4627 | | 0.6739 | 84.75 | 5000 | 0.4967 | 0.4452 | | 0.6447 | 93.22 | 5500 | 0.5071 | 0.4437 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
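No inference snippet is included above; as a convenience, here is a minimal pipeline sketch (an addition, not the author's code) with a placeholder audio path, assuming ffmpeg is installed for decoding.

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="infinitejoy/wav2vec2-large-xls-r-300m-sakha")
print(asr("clip.wav")["text"])  # "clip.wav" is a hypothetical Sakha recording
```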
infinitejoy/wav2vec2-large-xls-r-300m-romansh-vallader
infinitejoy
2022-03-24T11:58:11Z
4
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "rm-vallader", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: - rm-vallader license: apache-2.0 tags: - automatic-speech-recognition - mozilla-foundation/common_voice_7_0 - generated_from_trainer - rm-vallader - robust-speech-event - model_for_talk - hf-asr-leaderboard datasets: - mozilla-foundation/common_voice_7_0 model-index: - name: XLS-R-300M - Romansh Vallader results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 7 type: mozilla-foundation/common_voice_7_0 args: rm-vallader metrics: - name: Test WER type: wer value: 31.689 - name: Test CER type: cer value: 7.202 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-romansh-vallader This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - RM-VALLADER dataset. It achieves the following results on the evaluation set: - Loss: 0.3155 - Wer: 0.3162 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 32 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.9556 | 15.62 | 500 | 2.9300 | 1.0 | | 1.7874 | 31.25 | 1000 | 0.7566 | 0.6509 | | 1.0131 | 46.88 | 1500 | 0.3671 | 0.3828 | | 0.8439 | 62.5 | 2000 | 0.3350 | 0.3416 | | 0.7502 | 78.12 | 2500 | 0.3155 | 0.3296 | | 0.7093 | 93.75 | 3000 | 0.3182 | 0.3186 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
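As a usage illustration (not part of the original card), the ASR pipeline gives a one-call transcription; the file path below is a placeholder and ffmpeg is assumed for decoding.

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition",
               model="infinitejoy/wav2vec2-large-xls-r-300m-romansh-vallader")
print(asr("recording.mp3")["text"])  # hypothetical local recording
```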
infinitejoy/wav2vec2-large-xls-r-300m-marathi-cv8
infinitejoy
2022-03-24T11:58:09Z
235
1
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "mr", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: - mr license: apache-2.0 tags: - automatic-speech-recognition - mozilla-foundation/common_voice_8_0 - generated_from_trainer - mr - robust-speech-event - model_for_talk - hf-asr-leaderboard datasets: - mozilla-foundation/common_voice_8_0 model-index: - name: XLS-R-300M - Marathi results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 8 type: mozilla-foundation/common_voice_8_0 args: mr metrics: - name: Test WER type: wer value: 55.716 - name: Test CER type: cer value: 13.842 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-marathi-cv8 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MR dataset. It achieves the following results on the evaluation set: - Loss: 0.6483 - Wer: 0.6049 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.671 | 22.73 | 500 | 1.3618 | 0.9499 | | 1.1599 | 45.45 | 1000 | 0.6330 | 0.6627 | | 0.8252 | 68.18 | 1500 | 0.6226 | 0.6426 | | 0.6424 | 90.91 | 2000 | 0.6359 | 0.6041 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0
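No inference example is included above, so here is a minimal sketch (an addition, not the author's code) that streams one Common Voice 8 test clip; it assumes authenticated access to the gated dataset and 48 kHz source audio.

```python
import torch
import torchaudio.functional as F
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor

model_id = "infinitejoy/wav2vec2-large-xls-r-300m-marathi-cv8"

sample = next(iter(load_dataset(
    "mozilla-foundation/common_voice_8_0", "mr",
    split="test", streaming=True, use_auth_token=True)))
# Resample the 48 kHz clip to the 16 kHz the model was trained on
audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()

model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
```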
infinitejoy/wav2vec2-large-xls-r-300m-hausa
infinitejoy
2022-03-24T11:58:04Z
5
1
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "ha", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: - ha license: apache-2.0 tags: - automatic-speech-recognition - mozilla-foundation/common_voice_7_0 - generated_from_trainer - ha - robust-speech-event - model_for_talk - hf-asr-leaderboard datasets: - mozilla-foundation/common_voice_7_0 model-index: - name: XLS-R-300M - Hausa results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 7 type: mozilla-foundation/common_voice_7_0 args: ha metrics: - name: Test WER type: wer value: 100 - name: Test CER type: cer value: 132.32 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-hausa This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - HA dataset. It achieves the following results on the evaluation set: - Loss: 0.5756 - Wer: 0.6014 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.7064 | 11.36 | 500 | 2.7112 | 1.0 | | 1.3079 | 22.73 | 1000 | 0.7337 | 0.7776 | | 1.0919 | 34.09 | 1500 | 0.5938 | 0.7023 | | 0.9546 | 45.45 | 2000 | 0.5698 | 0.6133 | | 0.8895 | 56.82 | 2500 | 0.5739 | 0.6142 | | 0.8152 | 68.18 | 3000 | 0.5579 | 0.6091 | | 0.7703 | 79.55 | 3500 | 0.5813 | 0.6210 | | 0.732 | 90.91 | 4000 | 0.5756 | 0.5860 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
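The card documents training only; a short pipeline sketch follows as an illustration (placeholder file path, ffmpeg assumed), not as the author's code.

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="infinitejoy/wav2vec2-large-xls-r-300m-hausa")
print(asr("hausa_clip.wav")["text"])  # hypothetical local recording
```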
infinitejoy/wav2vec2-large-xls-r-300m-breton-cv8
infinitejoy
2022-03-24T11:58:01Z
5
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "br", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: - br license: apache-2.0 tags: - automatic-speech-recognition - mozilla-foundation/common_voice_8_0 - generated_from_trainer - br - robust-speech-event - model_for_talk - hf-asr-leaderboard datasets: - mozilla-foundation/common_voice_8_0 model-index: - name: XLS-R-300M - Breton results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 8 type: mozilla-foundation/common_voice_8_0 args: br metrics: - name: Test WER type: wer value: 54.855 - name: Test CER type: cer value: 17.865 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # XLS-R-300M - Breton This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - BR dataset. It achieves the following results on the evaluation set: - Loss: NA - Wer: NA ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: ### Training results NA ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.0+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.10.3 #### Evaluation Commands 1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test` ```bash python eval.py --model_id infinitejoy/wav2vec2-large-xls-r-300m-breton-cv8 --dataset mozilla-foundation/common_voice_8_0 --config br --split test ``` 2. To evaluate on `speech-recognition-community-v2/dev_data` ```bash python eval.py --model_id infinitejoy/wav2vec2-large-xls-r-300m-breton-cv8 --dataset speech-recognition-community-v2/dev_data --config br --split validation --chunk_length_s 5.0 --stride_length_s 1.0 ``` ### Inference With LM ```python import torch from datasets import load_dataset from transformers import AutoModelForCTC, AutoProcessor import torchaudio.functional as F model_id = "infinitejoy/wav2vec2-large-xls-r-300m-breton-cv8" sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "br", split="test", streaming=True, use_auth_token=True)) sample = next(sample_iter) resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy() model = AutoModelForCTC.from_pretrained(model_id) processor = AutoProcessor.from_pretrained(model_id) input_values = processor(resampled_audio, return_tensors="pt").input_values with torch.no_grad(): logits = model(input_values).logits transcription = processor.batch_decode(logits.numpy()).text ``` ### Eval results on Common Voice 7 "test" (WER): NA
anuragshas/wav2vec2-large-xls-r-300m-ur-cv8
anuragshas
2022-03-24T11:57:44Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "robust-speech-event", "hf-asr-leaderboard", "ur", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: - ur license: apache-2.0 tags: - generated_from_trainer - robust-speech-event - hf-asr-leaderboard datasets: - mozilla-foundation/common_voice_8_0 metrics: - wer model-index: - name: wav2vec2-large-xls-r-300m-ur-cv8 results: - task: type: automatic-speech-recognition name: Speech Recognition dataset: type: mozilla-foundation/common_voice_8_0 name: Common Voice 8 args: ur metrics: - type: wer value: 42.376 name: Test WER - name: Test CER type: cer value: 18.18 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-ur-cv8 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 1.1443 - Wer: 0.5677 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:------:| | 3.6269 | 15.98 | 400 | 3.3246 | 1.0 | | 3.0546 | 31.98 | 800 | 2.8148 | 0.9963 | | 1.4589 | 47.98 | 1200 | 1.0237 | 0.6584 | | 1.0911 | 63.98 | 1600 | 0.9524 | 0.5966 | | 0.8879 | 79.98 | 2000 | 0.9827 | 0.5822 | | 0.7467 | 95.98 | 2400 | 0.9923 | 0.5840 | | 0.6427 | 111.98 | 2800 | 0.9988 | 0.5714 | | 0.5685 | 127.98 | 3200 | 1.0872 | 0.5807 | | 0.5068 | 143.98 | 3600 | 1.1194 | 0.5822 | | 0.463 | 159.98 | 4000 | 1.1138 | 0.5692 | | 0.4212 | 175.98 | 4400 | 1.1232 | 0.5714 | | 0.4056 | 191.98 | 4800 | 1.1443 | 0.5677 | ### Framework versions - Transformers 4.16.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.1 - Tokenizers 0.11.0 #### Evaluation Commands 1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test` ```bash python eval.py --model_id anuragshas/wav2vec2-large-xls-r-300m-ur-cv8 --dataset mozilla-foundation/common_voice_8_0 --config ur --split test ``` ### Inference With LM ```python import torch from datasets import load_dataset from transformers import AutoModelForCTC, AutoProcessor import torchaudio.functional as F model_id = "anuragshas/wav2vec2-large-xls-r-300m-ur-cv8" sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "ur", split="test", streaming=True, use_auth_token=True)) sample = next(sample_iter) resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy() model = AutoModelForCTC.from_pretrained(model_id) processor = AutoProcessor.from_pretrained(model_id) input_values = processor(resampled_audio, return_tensors="pt").input_values with torch.no_grad(): logits = model(input_values).logits transcription = processor.batch_decode(logits.numpy()).text # => "اب نے ٹ پیس ان لیتے ہیں" ``` ### Eval results on Common Voice 8 "test" (WER): | Without LM | With LM (run `./eval.py`) | |---|---| | 52.146 | 42.376 |
anuragshas/wav2vec2-large-xls-r-300m-ha-cv8
anuragshas
2022-03-24T11:57:39Z
18
1
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "robust-speech-event", "hf-asr-leaderboard", "ha", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: - ha license: apache-2.0 tags: - generated_from_trainer - robust-speech-event - hf-asr-leaderboard datasets: - mozilla-foundation/common_voice_8_0 metrics: - wer model-index: - name: XLS-R-300M - Hausa results: - task: type: automatic-speech-recognition name: Speech Recognition dataset: type: mozilla-foundation/common_voice_8_0 name: Common Voice 8 args: ha metrics: - type: wer value: 36.295 name: Test WER - name: Test CER type: cer value: 11.073 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # XLS-R-300M - Hausa This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.6094 - Wer: 0.5234 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 13 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 1000 - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.9599 | 6.56 | 400 | 2.8650 | 1.0 | | 2.7357 | 13.11 | 800 | 2.7377 | 0.9951 | | 1.3012 | 19.67 | 1200 | 0.6686 | 0.7111 | | 1.0454 | 26.23 | 1600 | 0.5686 | 0.6137 | | 0.9069 | 32.79 | 2000 | 0.5576 | 0.5815 | | 0.82 | 39.34 | 2400 | 0.5502 | 0.5591 | | 0.7413 | 45.9 | 2800 | 0.5970 | 0.5586 | | 0.6872 | 52.46 | 3200 | 0.5817 | 0.5428 | | 0.634 | 59.02 | 3600 | 0.5636 | 0.5314 | | 0.6022 | 65.57 | 4000 | 0.5780 | 0.5229 | | 0.5705 | 72.13 | 4400 | 0.6036 | 0.5323 | | 0.5408 | 78.69 | 4800 | 0.6119 | 0.5336 | | 0.5225 | 85.25 | 5200 | 0.6105 | 0.5270 | | 0.5265 | 91.8 | 5600 | 0.6034 | 0.5231 | | 0.5154 | 98.36 | 6000 | 0.6094 | 0.5234 | ### Framework versions - Transformers 4.16.1 - Pytorch 1.10.0+cu111 - Datasets 1.18.2 - Tokenizers 0.11.0 #### Evaluation Commands 1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test` ```bash python eval.py --model_id anuragshas/wav2vec2-large-xls-r-300m-ha-cv8 --dataset mozilla-foundation/common_voice_8_0 --config ha --split test ``` ### Inference With LM ```python import torch from datasets import load_dataset from transformers import AutoModelForCTC, AutoProcessor import torchaudio.functional as F model_id = "anuragshas/wav2vec2-large-xls-r-300m-ha-cv8" sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "ha", split="test", streaming=True, use_auth_token=True)) sample = next(sample_iter) resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy() model = AutoModelForCTC.from_pretrained(model_id) processor = AutoProcessor.from_pretrained(model_id) input_values = processor(resampled_audio, return_tensors="pt").input_values with torch.no_grad(): logits = model(input_values).logits transcription = processor.batch_decode(logits.numpy()).text # => "kakin hade ya ke da kyautar" ``` ### Eval results on Common Voice 8 "test" (WER): | Without LM | With LM (run `./eval.py`) | |---|---| | 47.821 | 36.295 |
RuudVelo/wav2vec2-large-xls-r-1b-cv8-mt
RuudVelo
2022-03-24T11:57:36Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "mt", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
--- language: - mt license: apache-2.0 tags: - automatic-speech-recognition - mozilla-foundation/common_voice_8_0 - generated_from_trainer - mt - robust-speech-event - model_for_talk - hf-asr-leaderboard datasets: - mozilla-foundation/common_voice_8_0 model-index: - name: wav2vec2-large-xls-r-1b-cv8-mt results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 8 type: mozilla-foundation/common_voice_8_0 args: mt metrics: - name: Test WER type: wer value: 17.57 - name: Test CER type: cer value: 3.86 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Dev Data type: speech-recognition-community-v2/dev_data args: mt metrics: - name: Test WER type: wer value: null - name: Test CER type: cer value: null --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-1b-cv8-mt This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.2210 - Wer: 0.1974 ## Model description Note: another version of this model, augmented with a KenLM 3-gram language model, is available at https://huggingface.co/RuudVelo/wav2vec2-large-xls-r-1b-cv8-mt-lm; that version performs better than this one. ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following config and hyperparameters were used during training: model = Wav2Vec2ForCTC.from_pretrained( "facebook/wav2vec2-xls-r-1b", attention_dropout=0.05, hidden_dropout=0.05, feat_proj_dropout=0.05, mask_time_prob=0.55, mask_feature_prob=0.10, layerdrop=0.05, ctc_zero_infinity=True, ctc_loss_reduction="mean", pad_token_id=processor.tokenizer.pad_token_id, vocab_size=len(processor.tokenizer), ) from transformers import TrainingArguments training_args = TrainingArguments( output_dir=repo_name, group_by_length=True, per_device_train_batch_size=32, gradient_accumulation_steps=2, evaluation_strategy="steps", num_train_epochs=50, gradient_checkpointing=True, fp16=True, save_steps=400, eval_steps=400, logging_steps=400, learning_rate=5.5e-05, warmup_steps=500, save_total_limit=2, push_to_hub=True, report_to="tensorboard") ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.4564 | 13.33 | 400 | 0.3783 | 0.3981 | | 0.7931 | 26.66 | 800 | 0.2377 | 0.2298 | | 0.5364 | 39.98 | 1200 | 0.2210 | 0.1974 | Note that the test WER of 19.74 differs from the 17.57 reported above; this was due to a bug found while processing files with an older version of the datasets library. The correct library versions are listed below. ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0
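The card documents training but not inference; a short hedged sketch follows (placeholder file path, ffmpeg assumed for decoding). For decoding with the 3-gram LM, see the companion model linked above.

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="RuudVelo/wav2vec2-large-xls-r-1b-cv8-mt")
print(asr("maltese_clip.wav")["text"])  # hypothetical local recording
```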
RuudVelo/wav2vec2-large-xls-r-1b-cv8-mt-lm
RuudVelo
2022-03-24T11:57:33Z
10
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "mt", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
--- language: - mt license: apache-2.0 tags: - automatic-speech-recognition - mozilla-foundation/common_voice_8_0 - generated_from_trainer - mt - robust-speech-event - model_for_talk - hf-asr-leaderboard datasets: - mozilla-foundation/common_voice_8_0 model-index: - name: wav2vec2-large-xls-r-1b-cv8-mt-lm results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 8 type: mozilla-foundation/common_voice_8_0 args: mt metrics: - name: Test WER type: wer value: 15.88 - name: Test CER type: cer value: 3.65 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Dev Data type: speech-recognition-community-v2/dev_data args: mt metrics: - name: Test WER type: wer value: null - name: Test CER type: cer value: null --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-1b-cv8-mt-lm This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the Common Voice 8 dataset. It achieves the following results on the test set: - Loss: 0.2210 - Wer: 0.1974 Note that the above test results come from the original model without the LM (language model), which can be found at https://huggingface.co/RuudVelo/wav2vec2-large-xls-r-1b-cv8-mt. The results with the LM can be found in the model-index metrics of this card. ## Model description This is RuudVelo/wav2vec2-large-xls-r-1b-cv8-mt improved with a KenLM 3-gram language model. ## Intended uses & limitations More information needed ## Training and evaluation data The Common Voice 8 mt dataset was used for the model. ## Training procedure ### Training hyperparameters The following config and hyperparameters were used during training: model = Wav2Vec2ForCTC.from_pretrained( "facebook/wav2vec2-xls-r-1b", attention_dropout=0.05, hidden_dropout=0.05, feat_proj_dropout=0.05, mask_time_prob=0.55, mask_feature_prob=0.10, layerdrop=0.05, ctc_zero_infinity=True, ctc_loss_reduction="mean", pad_token_id=processor.tokenizer.pad_token_id, vocab_size=len(processor.tokenizer), ) from transformers import TrainingArguments training_args = TrainingArguments( output_dir=repo_name, group_by_length=True, per_device_train_batch_size=32, gradient_accumulation_steps=2, evaluation_strategy="steps", num_train_epochs=50, gradient_checkpointing=True, fp16=True, save_steps=400, eval_steps=400, logging_steps=400, learning_rate=5.5e-05, warmup_steps=500, save_total_limit=2, push_to_hub=True, report_to="tensorboard") ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0
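As a usage illustration (not from the original card): with `pyctcdecode` and `kenlm` installed, `AutoProcessor` should load the bundled 3-gram decoder, so the raw logits can be beam-search decoded, mirroring the "Inference With LM" pattern used by other cards in this collection; authenticated access to the gated dataset and 48 kHz source audio are assumed.

```python
import torch
import torchaudio.functional as F
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor

model_id = "RuudVelo/wav2vec2-large-xls-r-1b-cv8-mt-lm"

sample = next(iter(load_dataset(
    "mozilla-foundation/common_voice_8_0", "mt",
    split="test", streaming=True, use_auth_token=True)))
audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()

model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)  # loads the KenLM-backed decoder

inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
# Beam-search decoding through the bundled 3-gram LM
print(processor.batch_decode(logits.numpy()).text[0])
```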
NbAiLab/wav2vec2-xls-r-1b-npsc-bokmaal
NbAiLab
2022-03-24T11:57:25Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "dataset:NbAiLab/NPSC", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer - automatic-speech-recognition - NbAiLab/NPSC - robust-speech-event - false - nb-NO - hf-asr-leaderboard datasets: - NbAiLab/NPSC language: - nb-NO model-index: - name: wav2vec2-xls-r-1b-npsc-bokmaal results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: NPSC type: NbAiLab/NPSC args: 16K_mp3_bokmaal metrics: - name: "Test (Bokm\xE5l) WER" type: wer value: 0.07901700231893541 - name: "Test (Bokm\xE5l) CER" type: cer value: 0.029734583252347752 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xls-r-1b-npsc This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the [NbAiLab/NPSC (16K_mp3_bokmaal)](https://huggingface.co/datasets/NbAiLab/NPSC/viewer/16K_mp3_bokmaal/train) dataset. It achieves the following results on the evaluation set: - Loss: 0.1598 - WER: 0.0966 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 15.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 0.8361 | 0.32 | 500 | 0.6304 | 0.4970 | | 0.5703 | 0.64 | 1000 | 0.3195 | 0.2775 | | 0.5451 | 0.97 | 1500 | 0.2700 | 0.2246 | | 0.47 | 1.29 | 2000 | 0.2564 | 0.2329 | | 0.4063 | 1.61 | 2500 | 0.2459 | 0.2099 | | 0.374 | 1.93 | 3000 | 0.2175 | 0.1894 | | 0.3297 | 2.26 | 3500 | 0.2036 | 0.1755 | | 0.3145 | 2.58 | 4000 | 0.1957 | 0.1757 | | 0.3989 | 2.9 | 4500 | 0.1923 | 0.1723 | | 0.271 | 3.22 | 5000 | 0.1889 | 0.1649 | | 0.2758 | 3.55 | 5500 | 0.1768 | 0.1588 | | 0.2683 | 3.87 | 6000 | 0.1720 | 0.1534 | | 0.2341 | 4.19 | 6500 | 0.1689 | 0.1471 | | 0.2316 | 4.51 | 7000 | 0.1706 | 0.1405 | | 0.2383 | 4.84 | 7500 | 0.1637 | 0.1426 | | 0.2148 | 5.16 | 8000 | 0.1584 | 0.1347 | | 0.2085 | 5.48 | 8500 | 0.1601 | 0.1387 | | 0.2944 | 5.8 | 9000 | 0.1566 | 0.1294 | | 0.1944 | 6.13 | 9500 | 0.1494 | 0.1271 | | 0.1853 | 6.45 | 10000 | 0.1561 | 0.1247 | | 0.235 | 6.77 | 10500 | 0.1461 | 0.1215 | | 0.2286 | 7.09 | 11000 | 0.1447 | 0.1167 | | 0.1781 | 7.41 | 11500 | 0.1502 | 0.1199 | | 0.1714 | 7.74 | 12000 | 0.1425 | 0.1179 | | 0.1725 | 8.06 | 12500 | 0.1427 | 0.1173 | | 0.143 | 8.38 | 13000 | 0.1448 | 0.1142 | | 0.154 | 8.7 | 13500 | 0.1392 | 0.1104 | | 0.1447 | 9.03 | 14000 | 0.1404 | 0.1094 | | 0.1471 | 9.35 | 14500 | 0.1404 | 0.1088 | | 0.1479 | 9.67 | 15000 | 0.1414 | 0.1133 | | 0.1607 | 9.99 | 15500 | 0.1458 | 0.1171 | | 0.166 | 10.32 | 16000 | 0.1652 | 0.1264 | | 0.188 | 10.64 | 16500 | 0.1713 | 0.1322 | | 0.1461 | 10.96 | 17000 | 0.1423 | 0.1111 | | 0.1289 | 11.28 | 17500 | 0.1388 | 0.1097 | | 0.1273 | 11.61 | 18000 | 0.1438 | 0.1074 | | 0.1317 | 11.93 | 18500 | 0.1312 | 0.1066 | | 0.1448 | 12.25 | 19000 | 0.1446 | 0.1042 | | 0.1424 | 12.57 | 19500 | 0.1386 | 0.1015 | | 0.1392 | 12.89 | 20000 | 0.1379 | 0.1005 | 
| 0.1408 | 13.22 | 20500 | 0.1408 | 0.0992 | | 0.1239 | 13.54 | 21000 | 0.1338 | 0.0968 | | 0.1244 | 13.86 | 21500 | 0.1335 | 0.0957 | | 0.1254 | 14.18 | 22000 | 0.1382 | 0.0950 | | 0.1597 | 14.51 | 22500 | 0.1544 | 0.0970 | | 0.1566 | 14.83 | 23000 | 0.1589 | 0.0963 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu113 - Datasets 1.18.3.dev0 - Tokenizers 0.11.0
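The card lists no usage example; below is a minimal sketch under stated assumptions: the `16K_mp3_bokmaal` config already ships 16 kHz audio (so no resampling step), and a `test` split is assumed to exist.

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor

model_id = "NbAiLab/wav2vec2-xls-r-1b-npsc-bokmaal"

# Stream one sample; the 16K_mp3_bokmaal config is already at 16 kHz
sample = next(iter(load_dataset("NbAiLab/NPSC", "16K_mp3_bokmaal",
                                split="test", streaming=True)))

model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
inputs = processor(sample["audio"]["array"], sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
```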
NbAiLab/wav2vec2-large-voxrex-npsc-bokmaal
NbAiLab
2022-03-24T11:57:23Z
10
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "dataset:NbAiLab/NPSC", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer - automatic-speech-recognition - NbAiLab/NPSC - robust-speech-event - 'no' - nb-NO - hf-asr-leaderboard datasets: - NbAiLab/NPSC language: - nb-NO model-index: - name: wav2vec2-large-voxrex-npsc-bokmaal results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: NPSC type: NbAiLab/NPSC args: 16K_mp3_bokmaal metrics: - name: "Test (Bokm\xE5l) WER" type: wer value: 0.07028972259374369 - name: "Test (Bokm\xE5l) CER" type: cer value: 0.026870600821650645 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-voxrex-npsc-bokmaal This model was fine-tuned on the [NbAiLab/NPSC (16K_mp3_bokmaal)](https://huggingface.co/datasets/NbAiLab/NPSC) dataset. It achieves the following results on the evaluation set: - Loss: 0.1311 - Wer: 0.1038 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 8.379967082059723e-06 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 0.1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2127 | 0.32 | 500 | 0.1335 | 0.1047 | | 0.1976 | 0.64 | 1000 | 0.1309 | 0.1039 | | 0.1887 | 0.97 | 1500 | 0.1306 | 0.1040 | | 0.18 | 1.29 | 2000 | 0.1311 | 0.1038 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu113 - Datasets 1.18.4.dev0 - Tokenizers 0.11.0
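To sanity-check the reported WER on a handful of NPSC samples, here is a hedged sketch (not the author's eval script): it assumes `jiwer` is installed, that a `test` split exists, and that the transcript lives in a `text` column.

```python
import torch
from datasets import load_dataset
from jiwer import wer
from transformers import AutoModelForCTC, AutoProcessor

model_id = "NbAiLab/wav2vec2-large-voxrex-npsc-bokmaal"
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)

ds = load_dataset("NbAiLab/NPSC", "16K_mp3_bokmaal", split="test", streaming=True)

refs, hyps = [], []
for sample in ds.take(20):  # a small sample, not the full test set
    inputs = processor(sample["audio"]["array"], sampling_rate=16_000, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    hyps.append(processor.batch_decode(torch.argmax(logits, dim=-1))[0].lower())
    refs.append(sample["text"].lower())  # "text" column name is an assumption

print("WER on 20 samples:", wer(refs, hyps))
```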
HarrisDePerceptron/xls-r-1b-ur
HarrisDePerceptron
2022-03-24T11:57:20Z
15
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "ur", "robust-speech-event", "hf-asr-leaderboard", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
--- language: - ur license: apache-2.0 tags: - automatic-speech-recognition - mozilla-foundation/common_voice_8_0 - generated_from_trainer - ur - robust-speech-event - hf-asr-leaderboard datasets: - mozilla-foundation/common_voice_8_0 model-index: - name: '' results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 8.0 type: mozilla-foundation/common_voice_8_0 args: ur metrics: - name: Test WER type: wer value: 44.13 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - UR dataset. It achieves the following results on the evaluation set: - Loss: 0.9613 - Wer: 0.5376 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - num_epochs: 50.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 5.3118 | 1.96 | 100 | 2.9093 | 0.9982 | | 2.2071 | 3.92 | 200 | 1.1737 | 0.7779 | | 1.6098 | 5.88 | 300 | 0.9984 | 0.7015 | | 1.4333 | 7.84 | 400 | 0.9800 | 0.6705 | | 1.2859 | 9.8 | 500 | 0.9582 | 0.6487 | | 1.2073 | 11.76 | 600 | 0.8841 | 0.6077 | | 1.1417 | 13.73 | 700 | 0.9118 | 0.6343 | | 1.0988 | 15.69 | 800 | 0.9217 | 0.6196 | | 1.0279 | 17.65 | 900 | 0.9165 | 0.5867 | | 0.9765 | 19.61 | 1000 | 0.9306 | 0.5978 | | 0.9161 | 21.57 | 1100 | 0.9305 | 0.5768 | | 0.8395 | 23.53 | 1200 | 0.9828 | 0.5819 | | 0.8306 | 25.49 | 1300 | 0.9397 | 0.5760 | | 0.7819 | 27.45 | 1400 | 0.9544 | 0.5742 | | 0.7509 | 29.41 | 1500 | 0.9278 | 0.5690 | | 0.7218 | 31.37 | 1600 | 0.9003 | 0.5587 | | 0.6725 | 33.33 | 1700 | 0.9659 | 0.5554 | | 0.6287 | 35.29 | 1800 | 0.9522 | 0.5561 | | 0.6077 | 37.25 | 1900 | 0.9154 | 0.5465 | | 0.5873 | 39.22 | 2000 | 0.9331 | 0.5469 | | 0.5621 | 41.18 | 2100 | 0.9335 | 0.5491 | | 0.5168 | 43.14 | 2200 | 0.9632 | 0.5458 | | 0.5114 | 45.1 | 2300 | 0.9349 | 0.5387 | | 0.4986 | 47.06 | 2400 | 0.9364 | 0.5380 | | 0.4761 | 49.02 | 2500 | 0.9584 | 0.5391 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0
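No usage example is given above, so here is a minimal transcription sketch (an addition, not the author's code), assuming authenticated access to the gated Common Voice 8 dataset and 48 kHz source audio.

```python
import torch
import torchaudio.functional as F
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor

model_id = "HarrisDePerceptron/xls-r-1b-ur"

sample = next(iter(load_dataset(
    "mozilla-foundation/common_voice_8_0", "ur",
    split="test", streaming=True, use_auth_token=True)))
audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()

model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
```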
DrishtiSharma/wav2vec2-xls-r-sl-a2
DrishtiSharma
2022-03-24T11:57:17Z
7
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "sl", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
--- language: - sl license: apache-2.0 tags: - automatic-speech-recognition - mozilla-foundation/common_voice_8_0 - generated_from_trainer - sl - robust-speech-event - model_for_talk - hf-asr-leaderboard datasets: - mozilla-foundation/common_voice_8_0 model-index: - name: wav2vec2-xls-r-sl-a2 results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 8 type: mozilla-foundation/common_voice_8_0 args: sl metrics: - name: Test WER type: wer value: 0.21695212999560826 - name: Test CER type: cer value: 0.052850080572474256 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Dev Data type: speech-recognition-community-v2/dev_data args: sl metrics: - name: Test WER type: wer value: 0.560722380639029 - name: Test CER type: cer value: 0.2279626093074681 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Dev Data type: speech-recognition-community-v2/dev_data args: sl metrics: - name: Test WER type: wer value: 56.07 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Test Data type: speech-recognition-community-v2/eval_data args: sl metrics: - name: Test WER type: wer value: 56.19 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - SL dataset. It achieves the following results on the evaluation set: - Loss: 0.2855 - Wer: 0.2401 ### Evaluation Commands 1. To evaluate on mozilla-foundation/common_voice_8_0 with test split python eval.py --model_id DrishtiSharma/wav2vec2-xls-r-sl-a2 --dataset mozilla-foundation/common_voice_8_0 --config sl --split test --log_outputs 2. To evaluate on speech-recognition-community-v2/dev_data python eval.py --model_id DrishtiSharma/wav2vec2-xls-r-sl-a2 --dataset speech-recognition-community-v2/dev_data --config sl --split validation --chunk_length_s 10 --stride_length_s 1 ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 6.9294 | 6.1 | 500 | 2.9712 | 1.0 | | 2.8305 | 12.2 | 1000 | 1.7073 | 0.9479 | | 1.4795 | 18.29 | 1500 | 0.5756 | 0.6397 | | 1.3433 | 24.39 | 2000 | 0.4968 | 0.5424 | | 1.1766 | 30.49 | 2500 | 0.4185 | 0.4743 | | 1.0017 | 36.59 | 3000 | 0.3303 | 0.3578 | | 0.9358 | 42.68 | 3500 | 0.3003 | 0.3051 | | 0.8358 | 48.78 | 4000 | 0.3045 | 0.2884 | | 0.7647 | 54.88 | 4500 | 0.2866 | 0.2677 | | 0.7482 | 60.98 | 5000 | 0.2829 | 0.2585 | | 0.6943 | 67.07 | 5500 | 0.2782 | 0.2478 | | 0.6586 | 73.17 | 6000 | 0.2911 | 0.2537 | | 0.6425 | 79.27 | 6500 | 0.2817 | 0.2462 | | 0.6067 | 85.37 | 7000 | 0.2910 | 0.2436 | | 0.5974 | 91.46 | 7500 | 0.2875 | 0.2430 | | 0.5812 | 97.56 | 8000 | 0.2852 | 0.2396 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0
DrishtiSharma/wav2vec2-xls-r-myv-a1
DrishtiSharma
2022-03-24T11:57:14Z
5
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "myv", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
--- language: - myv license: apache-2.0 tags: - automatic-speech-recognition - mozilla-foundation/common_voice_8_0 - generated_from_trainer - myv - robust-speech-event - model_for_talk - hf-asr-leaderboard datasets: - mozilla-foundation/common_voice_8_0 model-index: - name: wav2vec2-xls-r-myv-a1 results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 8 type: mozilla-foundation/common_voice_8_0 args: myv metrics: - name: Test WER type: wer value: 0.6514672686230248 - name: Test CER type: cer value: 0.17226131905088124 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Dev Data type: speech-recognition-community-v2/dev_data args: vot metrics: - name: Test WER type: wer value: NA - name: Test CER type: cer value: NA --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MYV dataset. It achieves the following results on the evaluation set: - Loss: 1.0356 - Wer: 0.6524 ### Evaluation Commands **1. To evaluate on mozilla-foundation/common_voice_8_0 with test split** python eval.py --model_id DrishtiSharma/wav2vec2-xls-r-myv-a1 --dataset mozilla-foundation/common_voice_8_0 --config myv --split test --log_outputs **2. To evaluate on speech-recognition-community-v2/dev_data** Erzya language not found in speech-recognition-community-v2/dev_data ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0004 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 800 - num_epochs: 200.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:-----:|:---------------:|:------:| | 5.649 | 9.62 | 500 | 3.0038 | 1.0 | | 1.6272 | 19.23 | 1000 | 0.7362 | 0.7819 | | 1.1354 | 28.85 | 1500 | 0.6410 | 0.7111 | | 1.0424 | 38.46 | 2000 | 0.6907 | 0.7431 | | 0.9293 | 48.08 | 2500 | 0.7249 | 0.7102 | | 0.8246 | 57.69 | 3000 | 0.7422 | 0.6966 | | 0.7837 | 67.31 | 3500 | 0.7413 | 0.6813 | | 0.7147 | 76.92 | 4000 | 0.7873 | 0.6930 | | 0.6276 | 86.54 | 4500 | 0.8038 | 0.6677 | | 0.6041 | 96.15 | 5000 | 0.8240 | 0.6831 | | 0.5336 | 105.77 | 5500 | 0.8748 | 0.6749 | | 0.4705 | 115.38 | 6000 | 0.9006 | 0.6497 | | 0.43 | 125.0 | 6500 | 0.8954 | 0.6551 | | 0.3859 | 134.62 | 7000 | 0.9074 | 0.6614 | | 0.3342 | 144.23 | 7500 | 0.9693 | 0.6560 | | 0.3155 | 153.85 | 8000 | 1.0073 | 0.6691 | | 0.2673 | 163.46 | 8500 | 1.0170 | 0.6632 | | 0.2409 | 173.08 | 9000 | 1.0304 | 0.6709 | | 0.2189 | 182.69 | 9500 | 0.9965 | 0.6546 | | 0.1973 | 192.31 | 10000 | 1.0360 | 0.6551 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0 ### Evaluation Command !python eval.py \ --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-myv-v1 \ --dataset mozilla-foundation/common_voice_8_0 --config myv --split test --log_outputs
DrishtiSharma/wav2vec2-xls-r-300m-rm-vallader-d1
DrishtiSharma
2022-03-24T11:57:12Z
7
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "rm-vallader", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
--- language: - rm-vallader license: apache-2.0 tags: - automatic-speech-recognition - mozilla-foundation/common_voice_8_0 - generated_from_trainer - rm-vallader - robust-speech-event - model_for_talk - hf-asr-leaderboard datasets: - mozilla-foundation/common_voice_8_0 model-index: - name: wav2vec2-xls-r-300m-rm-vallader-d1 results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 8 type: mozilla-foundation/common_voice_8_0 args: rm-vallader metrics: - name: Test WER type: wer value: 0.26472007722007723 - name: Test CER type: cer value: 0.05860608074430969 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Dev Data type: speech-recognition-community-v2/dev_data args: vot metrics: - name: Test WER type: wer value: NA - name: Test CER type: cer value: NA --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - RM-VALLADER dataset. It achieves the following results on the evaluation set: - Loss: 0.2754 - Wer: 0.2831 ### Evaluation Commands 1. To evaluate on mozilla-foundation/common_voice_8_0 with test split python eval.py --model_id DrishtiSharma/wav2vec2-xls-r-300m-rm-vallader-d1 --dataset mozilla-foundation/common_voice_8_0 --config rm-vallader --split test --log_outputs 2. To evaluate on speech-recognition-community-v2/dev_data Romansh-Vallader language not found in speech-recognition-community-v2/dev_data ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.927 | 15.15 | 500 | 2.9196 | 1.0 | | 1.3835 | 30.3 | 1000 | 0.5879 | 0.5866 | | 0.7415 | 45.45 | 1500 | 0.3077 | 0.3316 | | 0.5575 | 60.61 | 2000 | 0.2735 | 0.2954 | | 0.4581 | 75.76 | 2500 | 0.2707 | 0.2802 | | 0.3977 | 90.91 | 3000 | 0.2785 | 0.2809 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0
DrishtiSharma/wav2vec2-xls-r-300m-pa-IN-r5
DrishtiSharma
2022-03-24T11:57:05Z
8
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "pa-IN", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
--- language: - pa-IN license: apache-2.0 tags: - automatic-speech-recognition - mozilla-foundation/common_voice_8_0 - generated_from_trainer - pa-IN - robust-speech-event - model_for_talk - hf-asr-leaderboard datasets: - mozilla-foundation/common_voice_8_0 model-index: - name: wav2vec2-xls-r-300m-pa-IN-r5 results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 8 type: mozilla-foundation/common_voice_8_0 args: pa-IN metrics: - name: Test WER type: wer value: 0.4186593492747942 - name: Test CER type: cer value: 0.13301322550753938 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Dev Data type: speech-recognition-community-v2/dev_data args: pa-IN metrics: - name: Test WER type: wer value: NA - name: Test CER type: cer value: NA --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - PA-IN dataset. It achieves the following results on the evaluation set: - Loss: 0.8881 - Wer: 0.4175 ### Evaluation Commands 1. To evaluate on mozilla-foundation/common_voice_8_0 with test split python eval.py --model_id DrishtiSharma/wav2vec2-xls-r-300m-pa-IN-r5 --dataset mozilla-foundation/common_voice_8_0 --config pa-IN --split test --log_outputs 2. To evaluate on speech-recognition-community-v2/dev_data Punjabi language isn't available in speech-recognition-community-v2/dev_data ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.000111 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 200.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:------:| | 10.695 | 18.52 | 500 | 3.5681 | 1.0 | | 3.2718 | 37.04 | 1000 | 2.3081 | 0.9643 | | 0.8727 | 55.56 | 1500 | 0.7227 | 0.5147 | | 0.3349 | 74.07 | 2000 | 0.7498 | 0.4959 | | 0.2134 | 92.59 | 2500 | 0.7779 | 0.4720 | | 0.1445 | 111.11 | 3000 | 0.8120 | 0.4594 | | 0.1057 | 129.63 | 3500 | 0.8225 | 0.4610 | | 0.0826 | 148.15 | 4000 | 0.8307 | 0.4351 | | 0.0639 | 166.67 | 4500 | 0.8967 | 0.4316 | | 0.0528 | 185.19 | 5000 | 0.8875 | 0.4238 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0
DrishtiSharma/wav2vec2-xls-r-300m-mt-o1
DrishtiSharma
2022-03-24T11:57:03Z
7
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "mt", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
---
language:
- mt
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- mt
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-xls-r-300m-mt-o1
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice 8
      type: mozilla-foundation/common_voice_8_0
      args: mt
    metrics:
    - name: Test WER
      type: wer
      value: 0.2378369069146646
    - name: Test CER
      type: cer
      value: 0.050364163712536256
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Robust Speech Event - Dev Data
      type: speech-recognition-community-v2/dev_data
      args: mt
    metrics:
    - name: Test WER
      type: wer
      value: NA
    - name: Test CER
      type: cer
      value: NA
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-xls-r-300m-mt-o1

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1987
- Wer: 0.1920

### Evaluation Commands

1. To evaluate on mozilla-foundation/common_voice_8_0 with the test split:

   `python eval.py --model_id DrishtiSharma/wav2vec2-xls-r-300m-mt-o1 --dataset mozilla-foundation/common_voice_8_0 --config mt --split test --log_outputs`

2. To evaluate on speech-recognition-community-v2/dev_data: the Maltese language is not found in speech-recognition-community-v2/dev_data.

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 100.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:---:|:---:|:---:|:---:|:---:|
| 1.1721 | 18.02 | 2000 | 0.3831 | 0.4066 |
| 0.7849 | 36.04 | 4000 | 0.2191 | 0.2417 |
| 0.6723 | 54.05 | 6000 | 0.2056 | 0.2134 |
| 0.6015 | 72.07 | 8000 | 0.2008 | 0.2031 |
| 0.5386 | 90.09 | 10000 | 0.1967 | 0.1953 |

### Framework versions

- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
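### Example: scoring a Common Voice subset

A sketch of what the `eval.py` call above boils down to, using `datasets` and `jiwer` directly. It assumes you have accepted the gated dataset's terms and are logged in, scores only a 100-sample subset for speed, and skips the text normalization that `eval.py` applies, so numbers will not match the card exactly:

```python
from datasets import Audio, load_dataset
from transformers import pipeline
import jiwer

# Gated dataset: assumes accepted terms and an authenticated Hugging Face login.
ds = load_dataset("mozilla-foundation/common_voice_8_0", "mt",
                  split="test", use_auth_token=True)
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

asr = pipeline("automatic-speech-recognition",
               model="DrishtiSharma/wav2vec2-xls-r-300m-mt-o1")

refs, hyps = [], []
for sample in ds.select(range(100)):  # small subset for a quick check
    hyps.append(asr(sample["audio"]["array"])["text"])
    refs.append(sample["sentence"])

print("WER:", jiwer.wer(refs, hyps))
```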
DrishtiSharma/wav2vec2-large-xls-r-300m-sat-final
DrishtiSharma
2022-03-24T11:56:58Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "sat", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
---
language:
- sat
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- sat
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-large-xls-r-300m-sat-final
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice 8
      type: mozilla-foundation/common_voice_8_0
      args: sat
    metrics:
    - name: Test WER
      type: wer
      value: 0.3493975903614458
    - name: Test CER
      type: cer
      value: 0.13773314203730272
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Robust Speech Event - Dev Data
      type: speech-recognition-community-v2/dev_data
      args: sat
    metrics:
    - name: Test WER
      type: wer
      value: NA
    - name: Test CER
      type: cer
      value: NA
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-large-xls-r-300m-sat-final

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - SAT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8012
- Wer: 0.3815

### Evaluation Commands

1. To evaluate on mozilla-foundation/common_voice_8_0 with the test split:

   `python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-sat-final --dataset mozilla-foundation/common_voice_8_0 --config sat --split test --log_outputs`

2. To evaluate on speech-recognition-community-v2/dev_data:

   `python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-sat-final --dataset speech-recognition-community-v2/dev_data --config sat --split validation --chunk_length_s 10 --stride_length_s 1`

   **Note: Santali (Ol Chiki) language not found in speech-recognition-community-v2/dev_data**

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 170
- num_epochs: 200
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:---:|:---:|:---:|:---:|:---:|
| 10.6317 | 33.29 | 100 | 2.8629 | 1.0 |
| 2.047 | 66.57 | 200 | 0.9516 | 0.5703 |
| 0.4475 | 99.86 | 300 | 0.8539 | 0.3896 |
| 0.0716 | 133.29 | 400 | 0.8277 | 0.3454 |
| 0.047 | 166.57 | 500 | 0.7597 | 0.3655 |
| 0.0249 | 199.86 | 600 | 0.8012 | 0.3815 |

### Framework versions

- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
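### Example: chunked inference for long audio

The `--chunk_length_s 10 --stride_length_s 1` flags above correspond to the pipeline's built-in chunking for long recordings; a sketch, with `long_recording.wav` as a hypothetical input:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="DrishtiSharma/wav2vec2-large-xls-r-300m-sat-final",
)

# Split the audio into 10 s windows with 1 s of overlapping stride on each side,
# mirroring the eval.py flags; predictions from overlapping regions are merged.
out = asr("long_recording.wav", chunk_length_s=10, stride_length_s=1)
print(out["text"])
```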
DrishtiSharma/wav2vec2-large-xls-r-300m-sat-a3
DrishtiSharma
2022-03-24T11:56:55Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "sat", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
---
language:
- sat
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- sat
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-large-xls-r-300m-sat-a3
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice 8
      type: mozilla-foundation/common_voice_8_0
      args: sat
    metrics:
    - name: Test WER
      type: wer
      value: 0.357429718875502
    - name: Test CER
      type: cer
      value: 0.14203730272596843
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Robust Speech Event - Dev Data
      type: speech-recognition-community-v2/dev_data
      args: sat
    metrics:
    - name: Test WER
      type: wer
      value: NA
    - name: Test CER
      type: cer
      value: NA
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-large-xls-r-300m-sat-a3

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - SAT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8961
- Wer: 0.3976

### Evaluation Commands

1. To evaluate on mozilla-foundation/common_voice_8_0 with the test split:

   `python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-sat-a3 --dataset mozilla-foundation/common_voice_8_0 --config sat --split test --log_outputs`

2. To evaluate on speech-recognition-community-v2/dev_data: note that the Santali (Ol Chiki) language is not found in speech-recognition-community-v2/dev_data.

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 200
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:---:|:---:|:---:|:---:|:---:|
| 11.1266 | 33.29 | 100 | 2.8577 | 1.0 |
| 2.1549 | 66.57 | 200 | 1.0799 | 0.5542 |
| 0.5628 | 99.86 | 300 | 0.7973 | 0.4016 |
| 0.0779 | 133.29 | 400 | 0.8424 | 0.4177 |
| 0.0404 | 166.57 | 500 | 0.9048 | 0.4137 |
| 0.0212 | 199.86 | 600 | 0.8961 | 0.3976 |

### Framework versions

- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
DrishtiSharma/wav2vec2-large-xls-r-300m-myv-v1
DrishtiSharma
2022-03-24T11:56:53Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "myv", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
---
language:
- myv
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- myv
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-large-xls-r-300m-myv-v1
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice 8
      type: mozilla-foundation/common_voice_8_0
      args: myv
    metrics:
    - name: Test WER
      type: wer
      value: 0.599548532731377
    - name: Test CER
      type: cer
      value: 0.12953851902597
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Robust Speech Event - Dev Data
      type: speech-recognition-community-v2/dev_data
      args: myv
    metrics:
    - name: Test WER
      type: wer
      value: NA
    - name: Test CER
      type: cer
      value: NA
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-large-xls-r-300m-myv-v1

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MYV dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8537
- Wer: 0.6160

### Evaluation Commands

1. To evaluate on mozilla-foundation/common_voice_8_0 with the test split:

   `python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-myv-v1 --dataset mozilla-foundation/common_voice_8_0 --config myv --split test --log_outputs`

2. To evaluate on speech-recognition-community-v2/dev_data: the Erzya language is not found in speech-recognition-community-v2/dev_data.

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.000222
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 150
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:---:|:---:|:---:|:---:|:---:|
| 19.453 | 1.92 | 50 | 16.4001 | 1.0 |
| 9.6875 | 3.85 | 100 | 5.4468 | 1.0 |
| 4.9988 | 5.77 | 150 | 4.3507 | 1.0 |
| 4.1148 | 7.69 | 200 | 3.6753 | 1.0 |
| 3.4922 | 9.62 | 250 | 3.3103 | 1.0 |
| 3.2443 | 11.54 | 300 | 3.1741 | 1.0 |
| 3.164 | 13.46 | 350 | 3.1346 | 1.0 |
| 3.0954 | 15.38 | 400 | 3.0428 | 1.0 |
| 3.0076 | 17.31 | 450 | 2.9137 | 1.0 |
| 2.6883 | 19.23 | 500 | 2.1476 | 0.9978 |
| 1.5124 | 21.15 | 550 | 0.8955 | 0.8225 |
| 0.8711 | 23.08 | 600 | 0.6948 | 0.7591 |
| 0.6695 | 25.0 | 650 | 0.6683 | 0.7636 |
| 0.5606 | 26.92 | 700 | 0.6821 | 0.7435 |
| 0.503 | 28.85 | 750 | 0.7220 | 0.7516 |
| 0.4528 | 30.77 | 800 | 0.6638 | 0.7324 |
| 0.4219 | 32.69 | 850 | 0.7120 | 0.7435 |
| 0.4109 | 34.62 | 900 | 0.7122 | 0.7511 |
| 0.3887 | 36.54 | 950 | 0.7179 | 0.7199 |
| 0.3895 | 38.46 | 1000 | 0.7322 | 0.7525 |
| 0.391 | 40.38 | 1050 | 0.6850 | 0.7364 |
| 0.3537 | 42.31 | 1100 | 0.7571 | 0.7279 |
| 0.3267 | 44.23 | 1150 | 0.7575 | 0.7257 |
| 0.3195 | 46.15 | 1200 | 0.7580 | 0.6998 |
| 0.2891 | 48.08 | 1250 | 0.7452 | 0.7101 |
| 0.294 | 50.0 | 1300 | 0.7316 | 0.6945 |
| 0.2854 | 51.92 | 1350 | 0.7241 | 0.6757 |
| 0.2801 | 53.85 | 1400 | 0.7532 | 0.6887 |
| 0.2502 | 55.77 | 1450 | 0.7587 | 0.6811 |
| 0.2427 | 57.69 | 1500 | 0.7231 | 0.6851 |
| 0.2311 | 59.62 | 1550 | 0.7288 | 0.6632 |
| 0.2176 | 61.54 | 1600 | 0.7711 | 0.6664 |
| 0.2117 | 63.46 | 1650 | 0.7914 | 0.6940 |
| 0.2114 | 65.38 | 1700 | 0.8065 | 0.6918 |
| 0.1913 | 67.31 | 1750 | 0.8372 | 0.6945 |
| 0.1897 | 69.23 | 1800 | 0.8051 | 0.6869 |
| 0.1865 | 71.15 | 1850 | 0.8076 | 0.6740 |
| 0.1844 | 73.08 | 1900 | 0.7935 | 0.6708 |
| 0.1757 | 75.0 | 1950 | 0.8015 | 0.6610 |
| 0.1636 | 76.92 | 2000 | 0.7614 | 0.6414 |
| 0.1637 | 78.85 | 2050 | 0.8123 | 0.6592 |
| 0.1599 | 80.77 | 2100 | 0.7907 | 0.6566 |
| 0.1498 | 82.69 | 2150 | 0.8641 | 0.6757 |
| 0.1545 | 84.62 | 2200 | 0.7438 | 0.6682 |
| 0.1433 | 86.54 | 2250 | 0.8014 | 0.6624 |
| 0.1427 | 88.46 | 2300 | 0.7758 | 0.6646 |
| 0.1423 | 90.38 | 2350 | 0.7741 | 0.6423 |
| 0.1298 | 92.31 | 2400 | 0.7938 | 0.6414 |
| 0.1111 | 94.23 | 2450 | 0.7976 | 0.6467 |
| 0.1243 | 96.15 | 2500 | 0.7916 | 0.6481 |
| 0.1215 | 98.08 | 2550 | 0.7594 | 0.6392 |
| 0.113 | 100.0 | 2600 | 0.8236 | 0.6392 |
| 0.1077 | 101.92 | 2650 | 0.7959 | 0.6347 |
| 0.0988 | 103.85 | 2700 | 0.8189 | 0.6392 |
| 0.0953 | 105.77 | 2750 | 0.8157 | 0.6414 |
| 0.0889 | 107.69 | 2800 | 0.7946 | 0.6369 |
| 0.0929 | 109.62 | 2850 | 0.8255 | 0.6360 |
| 0.0822 | 111.54 | 2900 | 0.8320 | 0.6334 |
| 0.086 | 113.46 | 2950 | 0.8539 | 0.6490 |
| 0.0825 | 115.38 | 3000 | 0.8438 | 0.6418 |
| 0.0727 | 117.31 | 3050 | 0.8568 | 0.6481 |
| 0.0717 | 119.23 | 3100 | 0.8447 | 0.6512 |
| 0.0815 | 121.15 | 3150 | 0.8470 | 0.6445 |
| 0.0689 | 123.08 | 3200 | 0.8264 | 0.6249 |
| 0.0726 | 125.0 | 3250 | 0.7981 | 0.6169 |
| 0.0648 | 126.92 | 3300 | 0.8237 | 0.6200 |
| 0.0632 | 128.85 | 3350 | 0.8416 | 0.6249 |
| 0.06 | 130.77 | 3400 | 0.8276 | 0.6173 |
| 0.0616 | 132.69 | 3450 | 0.8429 | 0.6209 |
| 0.0614 | 134.62 | 3500 | 0.8485 | 0.6271 |
| 0.0539 | 136.54 | 3550 | 0.8598 | 0.6218 |
| 0.0555 | 138.46 | 3600 | 0.8557 | 0.6169 |
| 0.0604 | 140.38 | 3650 | 0.8436 | 0.6186 |
| 0.0556 | 142.31 | 3700 | 0.8428 | 0.6178 |
| 0.051 | 144.23 | 3750 | 0.8440 | 0.6142 |
| 0.0526 | 146.15 | 3800 | 0.8566 | 0.6142 |
| 0.052 | 148.08 | 3850 | 0.8544 | 0.6178 |
| 0.0519 | 150.0 | 3900 | 0.8537 | 0.6160 |

### Framework versions

- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
DrishtiSharma/wav2vec2-large-xls-r-300m-hsb-v1
DrishtiSharma
2022-03-24T11:56:45Z
14
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "hsb", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
---
language:
- hsb
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- hsb
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-large-xls-r-300m-hsb-v1
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice 8
      type: mozilla-foundation/common_voice_8_0
      args: hsb
    metrics:
    - name: Test WER
      type: wer
      value: 0.4393
    - name: Test CER
      type: cer
      value: 0.1036
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Robust Speech Event - Dev Data
      type: speech-recognition-community-v2/dev_data
      args: hsb
    metrics:
    - name: Test WER
      type: wer
      value: NA
    - name: Test CER
      type: cer
      value: NA
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-large-xls-r-300m-hsb-v1

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - HSB dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5684
- Wer: 0.4402

### Evaluation Commands

1. To evaluate on mozilla-foundation/common_voice_8_0 with the test split:

   `python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-hsb-v1 --dataset mozilla-foundation/common_voice_8_0 --config hsb --split test --log_outputs`

2. To evaluate on speech-recognition-community-v2/dev_data: the Upper Sorbian language isn't available in speech-recognition-community-v2/dev_data.

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.00045
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:---:|:---:|:---:|:---:|:---:|
| 8.972 | 3.23 | 100 | 3.7498 | 1.0 |
| 3.3401 | 6.45 | 200 | 3.2320 | 1.0 |
| 3.2046 | 9.68 | 300 | 3.1741 | 0.9806 |
| 2.4031 | 12.9 | 400 | 1.0579 | 0.8996 |
| 1.0427 | 16.13 | 500 | 0.7989 | 0.7557 |
| 0.741 | 19.35 | 600 | 0.6405 | 0.6299 |
| 0.5699 | 22.58 | 700 | 0.6129 | 0.5928 |
| 0.4607 | 25.81 | 800 | 0.6548 | 0.5695 |
| 0.3827 | 29.03 | 900 | 0.6268 | 0.5190 |
| 0.3282 | 32.26 | 1000 | 0.5919 | 0.5016 |
| 0.2764 | 35.48 | 1100 | 0.5953 | 0.4805 |
| 0.2335 | 38.71 | 1200 | 0.5717 | 0.4728 |
| 0.2106 | 41.94 | 1300 | 0.5674 | 0.4569 |
| 0.1859 | 45.16 | 1400 | 0.5685 | 0.4502 |
| 0.1592 | 48.39 | 1500 | 0.5684 | 0.4402 |

### Framework versions

- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
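### Example: mapping the hyperparameters to TrainingArguments

A sketch of how the hyperparameters above map onto `transformers.TrainingArguments`; `output_dir` is hypothetical, and the model/data wiring the original run used is omitted:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-300m-hsb-v1",  # hypothetical output path
    learning_rate=4.5e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # 16 x 2 = effective train batch size of 32
    warmup_steps=500,               # "linear" is the default lr_scheduler_type
    num_train_epochs=50,
    seed=42,
    fp16=True,                      # "Native AMP" mixed-precision training
)
```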
DrishtiSharma/wav2vec2-large-xls-r-300m-bas-v1
DrishtiSharma
2022-03-24T11:56:40Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "bas", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
---
language:
- bas
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- bas
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-large-xls-r-300m-bas-v1
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice 8
      type: mozilla-foundation/common_voice_8_0
      args: bas
    metrics:
    - name: Test WER
      type: wer
      value: 0.3566497929130234
    - name: Test CER
      type: cer
      value: 0.1102657634184471
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Robust Speech Event - Dev Data
      type: speech-recognition-community-v2/dev_data
      args: bas
    metrics:
    - name: Test WER
      type: wer
      value: NA
    - name: Test CER
      type: cer
      value: NA
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-large-xls-r-300m-bas-v1

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - BAS dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5997
- Wer: 0.3870

### Evaluation Commands

1. To evaluate on mozilla-foundation/common_voice_8_0 with the test split:

   `python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-bas-v1 --dataset mozilla-foundation/common_voice_8_0 --config bas --split test --log_outputs`

2. To evaluate on speech-recognition-community-v2/dev_data: the Basaa (bas) language isn't available in speech-recognition-community-v2/dev_data.

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.000111
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:---:|:---:|:---:|:---:|:---:|
| 12.7076 | 5.26 | 200 | 3.6361 | 1.0 |
| 3.1657 | 10.52 | 400 | 3.0101 | 1.0 |
| 2.3987 | 15.78 | 600 | 0.9125 | 0.6774 |
| 1.0079 | 21.05 | 800 | 0.6477 | 0.5352 |
| 0.7392 | 26.31 | 1000 | 0.5432 | 0.4929 |
| 0.6114 | 31.57 | 1200 | 0.5498 | 0.4639 |
| 0.5222 | 36.83 | 1400 | 0.5220 | 0.4561 |
| 0.4648 | 42.1 | 1600 | 0.5586 | 0.4289 |
| 0.4103 | 47.36 | 1800 | 0.5337 | 0.4082 |
| 0.3692 | 52.62 | 2000 | 0.5421 | 0.3861 |
| 0.3403 | 57.88 | 2200 | 0.5549 | 0.4096 |
| 0.3011 | 63.16 | 2400 | 0.5833 | 0.3925 |
| 0.2932 | 68.42 | 2600 | 0.5674 | 0.3815 |
| 0.2696 | 73.68 | 2800 | 0.5734 | 0.3889 |
| 0.2496 | 78.94 | 3000 | 0.5968 | 0.3985 |
| 0.2289 | 84.21 | 3200 | 0.5888 | 0.3893 |
| 0.2091 | 89.47 | 3400 | 0.5849 | 0.3852 |
| 0.2005 | 94.73 | 3600 | 0.5938 | 0.3875 |
| 0.1876 | 99.99 | 3800 | 0.5997 | 0.3870 |

### Framework versions

- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
shpotes/xls-r-et-cv_8_0
shpotes
2022-03-24T11:56:18Z
10
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "robust-speech-event", "et", "hf-asr-leaderboard", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
---
language:
- et
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- robust-speech-event
- et
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: xls-r-et-cv_8_0
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice 7
      type: mozilla-foundation/common_voice_7_0
      args: et
    metrics:
    - name: Test WER
      type: wer
      value: 0.34180826781638346
    - name: Test CER
      type: cer
      value: 0.07356192733576256
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice 8.0
      type: mozilla-foundation/common_voice_8_0
      args: et
    metrics:
    - name: Test WER
      type: wer
      value: 34.18
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Robust Speech Event - Dev Data
      type: speech-recognition-community-v2/dev_data
      args: et
    metrics:
    - name: Test WER
      type: wer
      value: 45.53
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Robust Speech Event - Test Data
      type: speech-recognition-community-v2/eval_data
      args: et
    metrics:
    - name: Test WER
      type: wer
      value: 54.41
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xls-r-et-cv_8_0

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - ET dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4623
- Wer: 0.3420

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 72
- eval_batch_size: 72
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 144
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:---:|:---:|:---:|:---:|:---:|
| 0.3082 | 12.5 | 500 | 0.3871 | 0.4907 |
| 0.1497 | 25.0 | 1000 | 0.4168 | 0.4278 |
| 0.1243 | 37.5 | 1500 | 0.4446 | 0.4220 |
| 0.0954 | 50.0 | 2000 | 0.4426 | 0.3946 |
| 0.0741 | 62.5 | 2500 | 0.4502 | 0.3800 |
| 0.0533 | 75.0 | 3000 | 0.4618 | 0.3653 |
| 0.0447 | 87.5 | 3500 | 0.4518 | 0.3461 |
| 0.0396 | 100.0 | 4000 | 0.4623 | 0.3420 |

### Framework versions

- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.4.dev0
- Tokenizers 0.11.0
sammy786/wav2vec2-xlsr-interlingua
sammy786
2022-03-24T11:56:13Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "ia", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
---
language:
- ia
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- ia
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: sammy786/wav2vec2-xlsr-interlingua
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice 8
      type: mozilla-foundation/common_voice_8_0
      args: ia
    metrics:
    - name: Test WER
      type: wer
      value: 16.81
    - name: Test CER
      type: cer
      value: 4.76
---

# sammy786/wav2vec2-xlsr-interlingua

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - ia dataset.
It achieves the following results on the evaluation set (which is 10 percent of the train dataset merged with the other and dev datasets):
- Loss: 5.44
- Wer: 19.78

## Model description

"facebook/wav2vec2-xls-r-1b" was fine-tuned.

## Intended uses & limitations

More information needed

## Training and evaluation data

Training data - Common Voice Interlingua train.tsv, dev.tsv and other.tsv

## Training procedure

For creating the train dataset, all possible datasets were appended and a 90-10 split was used.

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.000045637994662983496
- train_batch_size: 16
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP

### Training results

| Step | Training Loss | Validation Loss | Wer |
|:---:|:---:|:---:|:---:|
| 200 | 4.649200 | 0.483339 | 0.511322 |
| 400 | 0.764700 | 0.133428 | 0.251288 |
| 600 | 0.563700 | 0.099292 | 0.227745 |
| 800 | 0.438800 | 0.087545 | 0.217445 |
| 1000 | 0.406800 | 0.072313 | 0.213848 |
| 1200 | 0.237500 | 0.066965 | 0.213766 |
| 1400 | 0.177800 | 0.064419 | 0.208126 |
| 1600 | 0.157100 | 0.065962 | 0.214011 |
| 1800 | 0.146600 | 0.059477 | 0.202076 |
| 2000 | 0.132800 | 0.055015 | 0.201831 |
| 2200 | 0.122000 | 0.055421 | 0.201749 |
| 2400 | 0.115700 | 0.054462 | 0.197826 |

### Framework versions

- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3

#### Evaluation Commands

1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`

```bash
python eval.py --model_id sammy786/wav2vec2-xlsr-interlingua --dataset mozilla-foundation/common_voice_8_0 --config ia --split test
```
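#### Example: computing WER and CER

The Test WER/CER above are standard word- and character-level edit-distance rates; a minimal sketch of computing both with a recent version of `jiwer`, on hypothetical reference/hypothesis strings:

```python
from jiwer import cer, wer

reference = "io parla interlingua"   # hypothetical ground-truth transcript
hypothesis = "io parla interlingu"   # hypothetical model output

print(f"WER: {wer(reference, hypothesis):.4f}")  # word error rate
print(f"CER: {cer(reference, hypothesis):.4f}")  # character error rate
```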
sammy786/wav2vec2-xlsr-georgian
sammy786
2022-03-24T11:56:11Z
7
1
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "ka", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
---
language:
- ka
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- ka
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: sammy786/wav2vec2-xlsr-georgian
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice 8
      type: mozilla-foundation/common_voice_8_0
      args: ka
    metrics:
    - name: Test WER
      type: wer
      value: 23.9
    - name: Test CER
      type: cer
      value: 3.59
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Robust Speech Event - Dev Data
      type: speech-recognition-community-v2/dev_data
      args: ka
    metrics:
    - name: Test WER
      type: wer
      value: 75.07
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Robust Speech Event - Test Data
      type: speech-recognition-community-v2/eval_data
      args: ka
    metrics:
    - name: Test WER
      type: wer
      value: 74.41
---

# sammy786/wav2vec2-xlsr-georgian

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - ka dataset.
It achieves the following results on the evaluation set (which is 10 percent of the train dataset merged with the other and dev datasets):
- Loss: 10.54
- Wer: 27.53

## Model description

"facebook/wav2vec2-xls-r-1b" was fine-tuned.

## Intended uses & limitations

More information needed

## Training and evaluation data

Training data - Common Voice Georgian train.tsv, dev.tsv and other.tsv

## Training procedure

For creating the train dataset, all possible datasets were appended and a 90-10 split was used.

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.000045637994662983496
- train_batch_size: 8
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP

### Training results

| Step | Training Loss | Validation Loss | Wer |
|:---:|:---:|:---:|:---:|
| 200 | 4.152100 | 0.823672 | 0.967814 |
| 400 | 0.889500 | 0.196740 | 0.444792 |
| 600 | 0.493700 | 0.155659 | 0.366115 |
| 800 | 0.328000 | 0.138066 | 0.358069 |
| 1000 | 0.260600 | 0.119236 | 0.324989 |
| 1200 | 0.217200 | 0.114050 | 0.313366 |
| 1400 | 0.188800 | 0.112600 | 0.302190 |
| 1600 | 0.166900 | 0.111154 | 0.295485 |
| 1800 | 0.155500 | 0.109963 | 0.286544 |
| 2000 | 0.140400 | 0.107587 | 0.277604 |
| 2200 | 0.142600 | 0.105662 | 0.277157 |
| 2400 | 0.135400 | 0.105414 | 0.275369 |

### Framework versions

- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3

#### Evaluation Commands

1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`

```bash
python eval.py --model_id sammy786/wav2vec2-xlsr-georgian --dataset mozilla-foundation/common_voice_8_0 --config ka --split test
```
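#### Example: the cosine_with_restarts schedule

A sketch of the `cosine_with_restarts` schedule named above, built via `transformers.get_scheduler`; the tiny placeholder model and total step count are assumptions for illustration only:

```python
import torch
from transformers import get_scheduler

model = torch.nn.Linear(8, 2)  # placeholder model, not the actual checkpoint
optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=4.5637994662983496e-05,
    betas=(0.9, 0.999),
    eps=1e-8,
)
scheduler = get_scheduler(
    "cosine_with_restarts",
    optimizer=optimizer,
    num_warmup_steps=500,
    num_training_steps=2400,  # placeholder total step count
)

# Each optimizer step is followed by a scheduler step, as in Trainer.
for _ in range(2400):
    optimizer.step()
    scheduler.step()
```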
samitizerxu/wav2vec2-xls-r-300m-es
samitizerxu
2022-03-24T11:56:03Z
5
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "common_voice", "generated_from_trainer", "es", "robust-speech-event", "hf-asr-leaderboard", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
- es
- robust-speech-event
- hf-asr-leaderboard
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-300m-es
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice 7
      type: mozilla-foundation/common_voice_7_0
      args: es
    metrics:
    - name: Test WER
      type: wer
      value: 37.37
    - name: Test CER
      type: cer
      value: 7.11
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Robust Speech Event - Dev Data
      type: speech-recognition-community-v2/dev_data
      args: es
    metrics:
    - name: Test WER
      type: wer
      value: 55.69
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Robust Speech Event - Test Data
      type: speech-recognition-community-v2/eval_data
      args: es
    metrics:
    - name: Test WER
      type: wer
      value: 57.28
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-xls-r-300m-es

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the COMMON_VOICE - ES dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5160
- Wer: 0.4016

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:---:|:---:|:---:|:---:|:---:|
| 3.1277 | 1.14 | 500 | 2.0259 | 0.9999 |
| 1.4111 | 2.28 | 1000 | 1.1251 | 0.8894 |
| 0.8461 | 3.42 | 1500 | 0.8205 | 0.7244 |
| 0.5042 | 4.57 | 2000 | 0.6116 | 0.5463 |
| 0.3072 | 5.71 | 2500 | 0.5507 | 0.4506 |
| 0.2181 | 6.85 | 3000 | 0.5213 | 0.4177 |
| 0.1608 | 7.99 | 3500 | 0.5161 | 0.4019 |

### Framework versions

- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0

#### Evaluation Commands

1. To evaluate on `mozilla-foundation/common_voice_7_0` with split `test`

```bash
python eval.py --model_id samitizerxu/wav2vec2-xls-r-300m-es --dataset mozilla-foundation/common_voice_7_0 --config es --split test
```

2. To evaluate on `speech-recognition-community-v2/dev_data`

```bash
python eval.py --model_id samitizerxu/wav2vec2-xls-r-300m-es --dataset speech-recognition-community-v2/dev_data --config es --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
ravirajoshi/wav2vec2-large-xls-r-300m-hindi
ravirajoshi
2022-03-24T11:56:00Z
22
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "robust-speech-event", "hf-asr-leaderboard", "hi", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
---
language:
- hi
license: apache-2.0
tags:
- generated_from_trainer
- robust-speech-event
- hf-asr-leaderboard
model-index:
- name: wav2vec2-large-xls-r-300m-hindi
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-large-xls-r-300m-hindi

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7049
- Wer: 0.3200
jcmc/wav2vec-cv7-1b-ir
jcmc
2022-03-24T11:55:47Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "ga-IE", "robust-speech-event", "hf-asr-leaderboard", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
---
language:
- ga-IE
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
- ga-IE
- robust-speech-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: wav2vec-cv7-1b-ir
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice 7
      type: mozilla-foundation/common_voice_7_0
      args: ga-IE
    metrics:
    - name: Test WER
      type: wer
      value: 39.1
    - name: Test CER
      type: cer
      value: 16.4
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec-cv7-1b-ir

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - GA-IE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9562
- Wer: 0.4801

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 100.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:---:|:---:|:---:|:---:|:---:|
| 2.3731 | 15.62 | 500 | 1.5517 | 0.9499 |
| 1.3312 | 31.25 | 1000 | 0.8717 | 0.6189 |
| 0.9135 | 46.86 | 1500 | 0.8299 | 0.5310 |
| 0.6719 | 62.49 | 2000 | 0.8842 | 0.5044 |
| 0.5583 | 78.12 | 2500 | 0.9093 | 0.4801 |
| 0.4728 | 93.74 | 3000 | 0.9488 | 0.4813 |

### Framework versions

- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
jcmc/wav2vec-1b-cv8-ir
jcmc
2022-03-24T11:55:44Z
6
1
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "ga-IE", "robust-speech-event", "hf-asr-leaderboard", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
---
language:
- ga-IE
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- ga-IE
- robust-speech-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec-1b-cv8-ir
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice 8
      type: mozilla-foundation/common_voice_8_0
      args: ga-IE
    metrics:
    - name: Test WER
      type: wer
      value: 43.7
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec-1b-cv8-ir

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - GA-IE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8445
- Wer: 0.5585

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 60.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:---:|:---:|:---:|:---:|:---:|
| 1.7135 | 31.24 | 500 | 0.9609 | 0.6926 |

### Framework versions

- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
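### Example: inference with explicit resampling

A manual inference sketch with explicit resampling via `torchaudio`; `sample.wav` is a hypothetical Irish-language recording at an arbitrary sample rate:

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "jcmc/wav2vec-1b-cv8-ir"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

waveform, sr = torchaudio.load("sample.wav")  # hypothetical input file
if sr != 16_000:
    # The model was trained on 16 kHz audio, so resample anything else.
    waveform = torchaudio.functional.resample(waveform, sr, 16_000)

inputs = processor(waveform.squeeze(0), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
```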