| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-09-11 18:29:29) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (555 classes) | tags (list, 1–4.05k items) | pipeline_tag (55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-09-11 18:25:24) | card (string, 11–1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
| sebastian-hofstaetter/uni-colberter-128-1-msmarco | sebastian-hofstaetter | 2022-03-27T15:21:06Z | 5 | 2 | transformers | ["transformers", "pytorch", "ColBERT", "bag-of-words", "dense-passage-retrieval", "knowledge-distillation", "en", "dataset:ms_marco", "arxiv:2203.13088", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2022-03-27T15:09:28Z |
---
license: apache-2.0
language: "en"
tags:
- bag-of-words
- dense-passage-retrieval
- knowledge-distillation
datasets:
- ms_marco
---
# Uni-ColBERTer (Dim: 1) for Passage Retrieval
If you want to know more about our (Uni-)ColBERTer architecture, check out our paper: https://arxiv.org/abs/2203.13088 🎉
For more information, source code, and a minimal usage example, please visit: https://github.com/sebastian-hofstaetter/colberter
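As a quick start, the checkpoint files can be pulled straight from the Hub. Note that this is only a hedged download-and-tokenize sketch: the Uni-ColBERTer model class itself lives in the GitHub repository linked above, and the tokenizer call assumes tokenizer files are stored alongside the weights.
```python
# Hedged sketch: fetch the checkpoint from the Hub; loading it into the
# Uni-ColBERTer class is done with the code from the GitHub repository above.
from huggingface_hub import snapshot_download
from transformers import AutoTokenizer

repo_id = "sebastian-hofstaetter/uni-colberter-128-1-msmarco"
local_dir = snapshot_download(repo_id)               # downloads config + weights
tokenizer = AutoTokenizer.from_pretrained(repo_id)   # assumes tokenizer files ship with the repo

print(local_dir)
print(tokenizer("how do liquid crystals work?", return_tensors="pt").input_ids.shape)
```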
## Limitations & Bias
- The model is trained only on English text.
- The model inherits social biases from both DistilBERT and MSMARCO.
- The model is trained only on the relatively short passages of MSMARCO (roughly 60 words on average), so it might struggle with longer text.
## Citation
If you use our model checkpoint, please cite our work as:
```
@article{Hofstaetter2022_colberter,
author = {Sebastian Hofst{\"a}tter and Omar Khattab and Sophia Althammer and Mete Sertkan and Allan Hanbury},
title = {Introducing Neural Bag of Whole-Words with ColBERTer: Contextualized Late Interactions using Enhanced Reduction},
publisher = {arXiv},
url = {https://arxiv.org/abs/2203.13088},
doi = {10.48550/ARXIV.2203.13088},
year = {2022},
}
```
| sebastian-hofstaetter/colberter-128-32-msmarco | sebastian-hofstaetter | 2022-03-27T15:07:44Z | 5 | 2 | transformers | ["transformers", "pytorch", "ColBERT", "bag-of-words", "dense-passage-retrieval", "knowledge-distillation", "en", "dataset:ms_marco", "arxiv:2203.13088", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2022-03-27T14:59:48Z |
---
license: apache-2.0
language: "en"
tags:
- bag-of-words
- dense-passage-retrieval
- knowledge-distillation
datasets:
- ms_marco
---
# ColBERTer (Dim: 32) for Passage Retrieval
If you want to know more about our ColBERTer architecture, check out our paper: https://arxiv.org/abs/2203.13088 🎉
For more information, source code, and a minimal usage example, please visit: https://github.com/sebastian-hofstaetter/colberter
## Limitations & Bias
- The model is trained only on English text.
- The model inherits social biases from both DistilBERT and MSMARCO.
- The model is trained only on the relatively short passages of MSMARCO (roughly 60 words on average), so it might struggle with longer text.
## Citation
If you use our model checkpoint, please cite our work as:
```
@article{Hofstaetter2022_colberter,
author = {Sebastian Hofst{\"a}tter and Omar Khattab and Sophia Althammer and Mete Sertkan and Allan Hanbury},
title = {Introducing Neural Bag of Whole-Words with ColBERTer: Contextualized Late Interactions using Enhanced Reduction},
publisher = {arXiv},
url = {https://arxiv.org/abs/2203.13088},
doi = {10.48550/ARXIV.2203.13088},
year = {2022},
}
```
| perevalov/query-validation-rubq | perevalov | 2022-03-27T14:14:28Z | 0 | 0 | tf-keras | ["tf-keras", "kgqa", "question answering", "sparql", "bert-base-cased", "en", "license:apache-2.0", "region:us"] | null | 2022-03-27T10:52:12Z |
---
language: en
tags:
- kgqa
- question answering
- sparql
- bert-base-cased
license: apache-2.0
---
# SPARQL Query Validation model
## Model description
## Intended uses & limitations
### How to use
| EMBO/sd-smallmol-roles | EMBO | 2022-03-27T13:28:53Z | 5 | 0 | transformers | ["transformers", "pytorch", "roberta", "token-classification", "token classification", "dataset:EMBO/sd-nlp", "license:agpl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-03-19T11:14:58Z |
---
language:
- english
thumbnail:
tags:
- token classification
license: agpl-3.0
datasets:
- EMBO/sd-nlp
metrics:
-
---
# sd-smallmol-roles
## Model description
This model is a [RoBERTa base model](https://huggingface.co/roberta-base) that was further trained using a masked language modeling task on a compendium of English scientific textual examples from the life sciences using the [BioLang dataset](https://huggingface.co/datasets/EMBO/biolang). It was then fine-tuned for token classification on the SourceData [sd-nlp](https://huggingface.co/datasets/EMBO/sd-nlp) dataset with the `SMALL_MOL_ROLES` configuration to perform pure context-dependent semantic role classification of bioentities.
## Intended uses & limitations
#### How to use
The intended use of this model is to infer the semantic role of small molecules with regard to the causal hypotheses tested in experiments reported in scientific papers.
For a quick check of the model:
```python
from transformers import pipeline, RobertaTokenizerFast, RobertaForTokenClassification
example = """<s>The <mask> overexpression in cells caused an increase in <mask> expression.</s>"""
tokenizer = RobertaTokenizerFast.from_pretrained('roberta-base', max_len=512)
model = RobertaForTokenClassification.from_pretrained('EMBO/sd-smallmol-roles')
ner = pipeline('ner', model, tokenizer=tokenizer)
res = ner(example)
for r in res:
print(r['word'], r['entity'])
```
#### Limitations and bias
The model must be used with the `roberta-base` tokenizer.
## Training data
The model was trained for token classification using the [EMBO/sd-nlp dataset](https://huggingface.co/datasets/EMBO/sd-nlp) which includes manually annotated examples.
## Training procedure
The training was run on an NVIDIA DGX Station with 4× Tesla V100 GPUs.
Training code is available at https://github.com/source-data/soda-roberta
- Model fine-tuned: EMBL/bio-lm
- Tokenizer vocab size: 50265
- Training data: EMBO/sd-nlp
- Dataset configuration: SMALL_MOL_ROLES
- Training with 48771 examples.
- Evaluating on 13801 examples.
- Training on 15 features: O, I-CONTROLLED_VAR, B-CONTROLLED_VAR, I-MEASURED_VAR, B-MEASURED_VAR
- Epochs: 0.33
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 0.0001
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
## Eval results
On 7178 examples of the test set with `sklearn.metrics`:
```
precision recall f1-score support
CONTROLLED_VAR 0.76 0.90 0.83 2946
MEASURED_VAR 0.60 0.71 0.65 852
micro avg 0.73 0.86 0.79 3798
macro avg 0.68 0.80 0.74 3798
weighted avg 0.73 0.86 0.79 3798
{'test_loss': 0.011743436567485332, 'test_accuracy_score': 0.9951612532624371, 'test_precision': 0.7261345852895149, 'test_recall': 0.8551869404949973, 'test_f1': 0.7853947527505744, 'test_runtime': 58.0378, 'test_samples_per_second': 123.678, 'test_steps_per_second': 1.947}
```
| EMBO/sd-geneprod-roles | EMBO | 2022-03-27T13:23:03Z | 8 | 0 | transformers | ["transformers", "pytorch", "roberta", "token-classification", "token classification", "dataset:EMBO/sd-nlp", "license:agpl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-03-19T10:38:53Z |
---
language:
- english
thumbnail:
tags:
- token classification
license: agpl-3.0
datasets:
- EMBO/sd-nlp
metrics:
-
---
# sd-geneprod-roles
## Model description
This model is a [RoBERTa base model](https://huggingface.co/roberta-base) that was further trained using a masked language modeling task on a compendium of English scientific textual examples from the life sciences using the [BioLang dataset](https://huggingface.co/datasets/EMBO/biolang). It was then fine-tuned for token classification on the SourceData [sd-nlp](https://huggingface.co/datasets/EMBO/sd-nlp) dataset with the `GENEPROD_ROLES` configuration to perform pure context-dependent semantic role classification of bioentities.
## Intended uses & limitations
#### How to use
The intended use of this model is to infer the semantic role of gene products (genes and proteins) with regard to the causal hypotheses tested in experiments reported in scientific papers.
For a quick check of the model:
```python
from transformers import pipeline, RobertaTokenizerFast, RobertaForTokenClassification
example = """<s>The <mask> overexpression in cells caused an increase in <mask> expression.</s>"""
tokenizer = RobertaTokenizerFast.from_pretrained('roberta-base', max_len=512)
model = RobertaForTokenClassification.from_pretrained('EMBO/sd-geneprod-roles')
ner = pipeline('ner', model, tokenizer=tokenizer)
res = ner(example)
for r in res:
print(r['word'], r['entity'])
```
#### Limitations and bias
The model must be used with the `roberta-base` tokenizer.
## Training data
The model was trained for token classification using the [EMBO/sd-nlp dataset](https://huggingface.co/datasets/EMBO/sd-nlp) which includes manually annotated examples.
## Training procedure
The training was run on an NVIDIA DGX Station with 4× Tesla V100 GPUs.
Training code is available at https://github.com/source-data/soda-roberta
- Model fine-tuned: EMBL/bio-lm
- Tokenizer vocab size: 50265
- Training data: EMBO/sd-nlp
- Dataset configuration: GENEPROD_ROLES
- Training with 48771 examples.
- Evaluating on 13801 examples.
- Training on 15 features: O, I-CONTROLLED_VAR, B-CONTROLLED_VAR, I-MEASURED_VAR, B-MEASURED_VAR
- Epochs: 0.9
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 0.0001
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
## Eval results
On 7178 examples of the test set with `sklearn.metrics`:
```
precision recall f1-score support
CONTROLLED_VAR 0.81 0.86 0.83 7835
MEASURED_VAR 0.82 0.85 0.84 9330
micro avg 0.82 0.85 0.83 17165
macro avg 0.82 0.85 0.83 17165
weighted avg 0.82 0.85 0.83 17165
{'test_loss': 0.03846803680062294, 'test_accuracy_score': 0.9854472664459946, 'test_precision': 0.8156312625250501, 'test_recall': 0.8535974366443344, 'test_f1': 0.8341825841897008, 'test_runtime': 58.7369, 'test_samples_per_second': 122.206, 'test_steps_per_second': 1.924}
```
| Danik51002/Example | Danik51002 | 2022-03-27T08:55:29Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-03-27T07:58:16Z |
---
tags:
- generated_from_trainer
model-index:
- name: Example
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Example
This model is a fine-tuned version of [sberbank-ai/rugpt3small_based_on_gpt2](https://huggingface.co/sberbank-ai/rugpt3small_based_on_gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 42
- eval_batch_size: 42
- seed: 42
- gradient_accumulation_steps: 20
- total_train_batch_size: 840
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 15
- num_epochs: 300
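For reference, a minimal sketch (an assumption about the setup, not the author's script) of how the values above map onto `transformers.TrainingArguments`; the effective batch size of 840 is derived as 42 × 20 rather than set directly, and the output directory is a placeholder:
```python
# Sketch only: the listed hyperparameters expressed as TrainingArguments.
# total_train_batch_size = per_device_train_batch_size * gradient_accumulation_steps = 42 * 20 = 840.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="example-rugpt3small",   # placeholder path
    learning_rate=5e-5,
    per_device_train_batch_size=42,
    per_device_eval_batch_size=42,
    seed=42,
    gradient_accumulation_steps=20,
    lr_scheduler_type="linear",
    warmup_steps=15,
    num_train_epochs=300,
)
print(args.train_batch_size * args.gradient_accumulation_steps)  # 840 on a single device
```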
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Tokenizers 0.11.6
| PaddyP/distilbert-base-uncased-finetuned-emotion | PaddyP | 2022-03-27T07:06:37Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-27T06:12:25Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2302
- Accuracy: 0.922
- F1: 0.9218
## Model description
More information needed
## Intended uses & limitations
More information needed
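Although no usage notes are provided, the checkpoint should load through the standard text-classification pipeline; a minimal sketch with an illustrative input sentence:
```python
# Minimal sketch: query the fine-tuned emotion classifier via the pipeline API.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="PaddyP/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't wait to see my friends this weekend!"))  # [{'label': ..., 'score': ...}]
```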
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.3344 | 0.903 | 0.9004 |
| No log | 2.0 | 500 | 0.2302 | 0.922 | 0.9218 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 2.0.0
- Tokenizers 0.10.3
| Ayham/roberta_roberta_summarization_xsum | Ayham | 2022-03-27T00:55:04Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "generated_from_trainer", "dataset:xsum", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2022-03-26T19:07:42Z |
---
tags:
- generated_from_trainer
datasets:
- xsum
model-index:
- name: roberta_roberta_summarization_xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta_roberta_summarization_xsum
This model is a fine-tuned version of [](https://huggingface.co/) on the xsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.10.3
| MONAI/example_spleen_segmentation | MONAI | 2022-03-27T00:32:20Z | 0 | 6 | null | ["monai", "arxiv:1811.12506", "region:us"] | null | 2022-03-02T23:29:04Z |
---
tags:
- monai
---
# Description
A pre-trained model for volumetric (3D) segmentation of the spleen from CT images.
# Model Overview
This model was trained with the runner-up awarded pipeline [1] of the "Medical Segmentation Decathlon Challenge 2018", using the UNet architecture [2] with 32 training images and 9 validation images.
## Data
The training dataset is Task09_Spleen.tar from http://medicaldecathlon.com/.
## Training configuration
The training was performed on GPUs with at least 12 GB of memory.
Actual Model Input: 96 x 96 x 96
## Input and output formats
Input: 1 channel CT image
Output: 2 channels: Label 1: spleen; Label 0: everything else
## Scores
This model achieves the following Dice score on the validation data (our own split from the training dataset):
Mean Dice = 0.96
## Example commands
Execute inference:
```
python -m monai.bundle run evaluator --meta_file configs/metadata.json --config_file configs/inference.json --logging_file configs/logging.conf
```
Verify the metadata format:
```
python -m monai.bundle verify_metadata --meta_file configs/metadata.json --filepath eval/schema.json
```
Verify the data shape of the network:
```
python -m monai.bundle verify_net_in_out network_def --meta_file configs/metadata.json --config_file configs/inference.json
```
Export checkpoint to TorchScript file:
```
python -m monai.bundle export network_def --filepath models/model.ts --ckpt_file models/model.pt --meta_file configs/metadata.json --config_file configs/inference.json
```
# Disclaimer
This is an example, not to be used for diagnostic purposes.
# References
[1] Xia, Yingda, et al. "3D Semi-Supervised Learning with Uncertainty-Aware Multi-View Co-Training." arXiv preprint arXiv:1811.12506 (2018). https://arxiv.org/abs/1811.12506.
[2] Kerfoot E., Clough J., Oksuz I., Lee J., King A.P., Schnabel J.A. (2019) Left-Ventricle Quantification Using Residual U-Net. In: Pop M. et al. (eds) Statistical Atlases and Computational Models of the Heart. Atrial Segmentation and LV Quantification Challenges. STACOM 2018. Lecture Notes in Computer Science, vol 11395. Springer, Cham. https://doi.org/10.1007/978-3-030-12029-0_40
| huggingtweets/mkobach-naval-shaneaparrish | huggingtweets | 2022-03-27T00:07:05Z | 3 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-03-27T00:04:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/mkobach-naval-shaneaparrish/1648339620049/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1374075536595505154/1_1jV_AF_400x400.png')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1253758424292171778/48gD7Hne_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1256841238298292232/ycqwaMI2_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Matthew Kobach & Shane Parrish & Naval</div>
<div style="text-align: center; font-size: 14px;">@mkobach-naval-shaneaparrish</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Matthew Kobach & Shane Parrish & Naval.
| Data | Matthew Kobach | Shane Parrish | Naval |
| --- | --- | --- | --- |
| Tweets downloaded | 3248 | 3197 | 3249 |
| Retweets | 135 | 102 | 181 |
| Short tweets | 444 | 147 | 617 |
| Tweets kept | 2669 | 2948 | 2451 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/17cy2tt4/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mkobach-naval-shaneaparrish's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1zkb00dh) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1zkb00dh/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/mkobach-naval-shaneaparrish')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
| Mnauel/wav2vec2-base-finetuned-ks | Mnauel | 2022-03-26T20:53:27Z | 5 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "wav2vec2", "audio-classification", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us"] | audio-classification | 2022-03-12T10:51:33Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-ks
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-ks
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5766
- Accuracy: 0.8308
## Model description
More information needed
## Intended uses & limitations
More information needed
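No usage example is given; as a hedged sketch, the checkpoint should be usable through the audio-classification pipeline (the `-ks` suffix suggests keyword spotting, and the WAV path below is a placeholder for a 16 kHz clip):
```python
# Hedged sketch: audio classification with the pipeline API.
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="Mnauel/wav2vec2-base-finetuned-ks",
)
print(classifier("path/to/keyword_clip.wav"))  # placeholder path; returns label/score pairs
```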
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 7 | 0.7247 | 0.7462 |
| No log | 2.0 | 14 | 0.6844 | 0.7615 |
| 0.4279 | 3.0 | 21 | 0.7254 | 0.7462 |
| 0.4279 | 4.0 | 28 | 0.5891 | 0.8 |
| 0.4279 | 5.0 | 35 | 0.6991 | 0.7462 |
| 0.4478 | 6.0 | 42 | 0.6579 | 0.7615 |
| 0.4478 | 7.0 | 49 | 0.6164 | 0.8 |
| 0.4478 | 8.0 | 56 | 0.6191 | 0.8077 |
| 0.4194 | 9.0 | 63 | 0.5766 | 0.8308 |
| 0.4194 | 10.0 | 70 | 0.5704 | 0.8154 |
| 0.4194 | 11.0 | 77 | 0.6518 | 0.8 |
| 0.3833 | 12.0 | 84 | 0.6190 | 0.8077 |
| 0.3833 | 13.0 | 91 | 0.5693 | 0.8231 |
| 0.3833 | 14.0 | 98 | 0.5628 | 0.8231 |
| 0.3607 | 15.0 | 105 | 0.5741 | 0.8154 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.10.3
| dannyvas23/electricidad-small-discriminator-finetuned-clasificacion-texto-suicida | dannyvas23 | 2022-03-26T19:22:14Z | 25 | 1 | transformers | ["transformers", "pytorch", "tensorboard", "electra", "text-classification", "generated_from_trainer", "sentiment", "emotion", "es", "license:afl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-26T17:19:56Z |
---
license: afl-3.0
language: "es"
tags:
- generated_from_trainer
- sentiment
- emotion
widget:
- text: "La vida no merece la pena"
example_title: "Ejemplo 1"
- text: "Para vivir así lo mejor es estar muerto"
example_title: "Ejemplo 2"
- text: "me siento triste por no poder viajar"
example_title: "Ejemplo 3"
- text: "Quiero terminar con todo"
example_title: "Ejemplo 4"
- text: "Disfruto de la vista"
example_title: "Ejemplo 5"
metrics:
- accuracy
model-index:
- name: electricidad-small-discriminator-finetuned-clasificacion-texto-suicida
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electricidad-small-discriminator-finetuned-clasificacion-texto-suicida
This model is a fine-tuned version of [mrm8488/electricidad-small-discriminator](https://huggingface.co/mrm8488/electricidad-small-discriminator) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0458
- Accuracy: 0.9916
## Model description
More information needed
## Intended uses & limitations
More information needed
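Although this section is empty, the widget examples above suggest straightforward use through the text-classification pipeline; a minimal sketch reusing one of those examples:
```python
# Minimal sketch: Spanish text classification via the pipeline API.
from transformers import pipeline

clasificador = pipeline(
    "text-classification",
    model="dannyvas23/electricidad-small-discriminator-finetuned-clasificacion-texto-suicida",
)
print(clasificador("La vida no merece la pena"))  # example taken from the card's widget
```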
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Validation Loss | Accuracy |
|:-------------:|:-----:|:---------------:|:--------:|
| 0.161100 | 1.0 | 0.133057 | 0.952718 |
| 0.134500 | 2.0 | 0.110966 | 0.960804 |
| 0.108500 | 3.0 | 0.086417 | 0.970835 |
| 0.099400 | 4.0 | 0.073618 | 0.974856 |
| 0.090500 | 5.0 | 0.065231 | 0.979629 |
| 0.080700 | 6.0 | 0.060849 | 0.982324 |
| 0.069200 | 7.0 | 0.054718 | 0.986125 |
| 0.060400 | 8.0 | 0.051153 | 0.985948 |
| 0.048200 | 9.0 | 0.045747 | 0.989748 |
| 0.045500 | 10.0 | 0.049992 | 0.988069 |
| 0.043400 | 11.0 | 0.046325 | 0.990234 |
| 0.034300 | 12.0 | 0.050746 | 0.989792 |
| 0.032900 | 13.0 | 0.043434 | 0.991737 |
| 0.028400 | 14.0 | 0.045003 | 0.991869 |
| 0.022300 | 15.0 | 0.045819 | 0.991648 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
| dannyvas23/clasificacion-texto-suicida-finetuned-amazon-review | dannyvas23 | 2022-03-26T17:12:23Z | 24 | 2 | transformers | ["transformers", "pytorch", "tensorboard", "electra", "text-classification", "generated_from_trainer", "sentiment", "emotion", "es", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-21T19:26:40Z |
---
language: "es"
tags:
- generated_from_trainer
- sentiment
- emotion
widget:
- text: "no me gusta esta vida."
example_title: "Ejemplo 1"
- text: "odio estar ahi"
example_title: "Ejemplo 2"
- text: "me siento triste por no poder viajar"
example_title: "Ejemplo 3"
metrics:
- accuracy
model-index:
- name: clasificacion-texto-suicida-finetuned-amazon-review
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificacion-texto-suicida-finetuned-amazon-review
This model is a fine-tuned version of [mrm8488/electricidad-small-discriminator](https://huggingface.co/mrm8488/electricidad-small-discriminator) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1546
- Accuracy: 0.9488
## Model description
More information needed
## Intended uses & limitations
More information needed
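As with the sibling model above, a hedged sketch of manual inference with the tokenizer and model classes; the label names printed at the end depend on how `id2label` was exported in the checkpoint config:
```python
# Hedged sketch: manual inference without the pipeline helper.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "dannyvas23/clasificacion-texto-suicida-finetuned-amazon-review"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("no me gusta esta vida.", return_tensors="pt")  # widget example from the card
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]
print({model.config.id2label[i]: round(p.item(), 4) for i, p in enumerate(probs)})
```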
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1643 | 1.0 | 12022 | 0.1546 | 0.9488 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
| bigmorning/distilbert1000e | bigmorning | 2022-03-26T15:31:46Z | 5 | 0 | transformers | ["transformers", "tf", "distilbert", "fill-mask", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2022-03-26T15:27:21Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilbert1000e
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilbert1000e
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
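Usage is left unspecified; since the repository is tagged as a TensorFlow checkpoint, a hedged sketch with the fill-mask pipeline and the TensorFlow backend:
```python
# Hedged sketch: masked-token prediction; framework="tf" because the repo hosts TF weights.
from transformers import pipeline

unmasker = pipeline(
    "fill-mask",
    model="bigmorning/distilbert1000e",
    framework="tf",
)
print(unmasker("The goal of life is [MASK]."))  # distilbert-base-uncased uses the [MASK] token
```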
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.8.0
- Datasets 2.0.0
- Tokenizers 0.11.6
| uw-madison/nystromformer-1024 | uw-madison | 2022-03-26T14:58:18Z | 44 | 0 | transformers | ["transformers", "pytorch", "nystromformer", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2022-03-23T18:56:40Z |
Nystromformer for sequence length 1024 trained on WikiText-103 v1 for 150 epochs.
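A minimal usage sketch, assuming the checkpoint behaves like any other masked-language model on the Hub:
```python
# Minimal sketch: fill-mask inference with the Nystromformer checkpoint.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="uw-madison/nystromformer-1024")
print(unmasker("Paris is the [MASK] of France."))
```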
| scasutt/wav2vec2-base_toy_train_data_masked_audio_10ms | scasutt | 2022-03-26T14:57:09Z | 5 | 0 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-26T12:28:41Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base_toy_train_data_masked_audio_10ms
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base_toy_train_data_masked_audio_10ms
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2477
- Wer: 0.7145
## Model description
More information needed
## Intended uses & limitations
More information needed
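No usage notes are given; a hedged sketch using the automatic-speech-recognition pipeline (the audio path is a placeholder for a 16 kHz English recording):
```python
# Hedged sketch: transcribe an audio file with the ASR pipeline.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="scasutt/wav2vec2-base_toy_train_data_masked_audio_10ms",
)
print(asr("path/to/audio_16khz.wav")["text"])  # placeholder path
```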
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.1337 | 1.05 | 250 | 3.4081 | 0.9982 |
| 3.0792 | 2.1 | 500 | 3.2446 | 0.9982 |
| 2.0577 | 3.15 | 750 | 1.5839 | 0.9492 |
| 1.3639 | 4.2 | 1000 | 1.3279 | 0.8798 |
| 1.0814 | 5.25 | 1250 | 1.1629 | 0.8294 |
| 0.8722 | 6.3 | 1500 | 1.1305 | 0.8140 |
| 0.7602 | 7.35 | 1750 | 1.1241 | 0.7972 |
| 0.6982 | 8.4 | 2000 | 1.1429 | 0.7780 |
| 0.6494 | 9.45 | 2250 | 1.1047 | 0.7620 |
| 0.5924 | 10.5 | 2500 | 1.1756 | 0.7649 |
| 0.5385 | 11.55 | 2750 | 1.2230 | 0.7736 |
| 0.5026 | 12.6 | 3000 | 1.1783 | 0.7472 |
| 0.4973 | 13.65 | 3250 | 1.1613 | 0.7287 |
| 0.4726 | 14.7 | 3500 | 1.1923 | 0.7345 |
| 0.4521 | 15.75 | 3750 | 1.2153 | 0.7171 |
| 0.4552 | 16.81 | 4000 | 1.2485 | 0.7226 |
| 0.422 | 17.86 | 4250 | 1.2664 | 0.7240 |
| 0.3708 | 18.91 | 4500 | 1.2352 | 0.7148 |
| 0.3516 | 19.96 | 4750 | 1.2477 | 0.7145 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
| bigmorning/distilbert500e | bigmorning | 2022-03-26T14:54:50Z | 4 | 0 | transformers | ["transformers", "tf", "distilbert", "fill-mask", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2022-03-26T14:48:24Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilbert500e
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilbert500e
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.8.0
- Datasets 2.0.0
- Tokenizers 0.11.6
| rahulacj/mbart-large-cc25-finetuned-hi-to-en | rahulacj | 2022-03-26T14:06:02Z | 5 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2022-03-18T10:19:49Z |
---
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-large-cc25-finetuned-hi-to-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-large-cc25-finetuned-hi-to-en
This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4710
- Bleu: 16.6154
- Gen Len: 42.6244
## Model description
More information needed
## Intended uses & limitations
More information needed
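Usage is not documented here; a hedged sketch following the standard mBART recipe, where the language codes `hi_IN`/`en_XX` and the use of `decoder_start_token_id` are assumptions carried over from the facebook/mbart-large-cc25 base model rather than details confirmed by this card:
```python
# Hedged sketch: Hindi -> English translation with the fine-tuned mBART checkpoint.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "rahulacj/mbart-large-cc25-finetuned-hi-to-en"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

tokenizer.src_lang = "hi_IN"  # source language code from the mBART-cc25 vocabulary
inputs = tokenizer("भारत एक विशाल देश है।", return_tensors="pt")
generated = model.generate(
    **inputs,
    decoder_start_token_id=tokenizer.lang_code_to_id["en_XX"],  # target language, per the mBART docs
    max_length=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```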
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 1.5705 | 1.0 | 3955 | 1.4858 | 14.8984 | 47.6759 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
| zuppif/versioning-test | zuppif | 2022-03-26T13:35:30Z | 0 | 0 | null | ["region:us"] | null | 2022-03-26T13:34:47Z |
| | uid | hidden_size |
|---:|:------------------------------------------------------------------------------------------------------------------------|--------------:|
| 0 | [e87a4e028b11ec7bf770c6f3ab5c6349](https://huggingface.co/zuppif/versioning-test/tree/e87a4e028b11ec7bf770c6f3ab5c6349) | 8 |
| 1 | [48f2a327cfb7cb0f9b519d9abf73a9be](https://huggingface.co/zuppif/versioning-test/tree/48f2a327cfb7cb0f9b519d9abf73a9be) | 16 |
| 2 | [1c9d18df9ec06b5f7e2f49b2ef1cb826](https://huggingface.co/zuppif/versioning-test/tree/1c9d18df9ec06b5f7e2f49b2ef1cb826) | 32 |
| Mr-Wick/Roberta | Mr-Wick | 2022-03-26T12:39:55Z | 3 | 0 | transformers | ["transformers", "tf", "roberta", "question-answering", "generated_from_keras_callback", "license:mit", "endpoints_compatible", "region:us"] | question-answering | 2022-03-23T16:08:46Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: Roberta
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Roberta
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 16476, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.8.0
- Datasets 2.0.0
- Tokenizers 0.11.6
| scasutt/wav2vec2-base_toy_train_data_fast_10pct | scasutt | 2022-03-26T12:28:13Z | 4 | 0 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-26T10:09:45Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base_toy_train_data_fast_10pct
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base_toy_train_data_fast_10pct
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3087
- Wer: 0.7175
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.1309 | 1.05 | 250 | 3.4541 | 0.9982 |
| 3.0499 | 2.1 | 500 | 3.0231 | 0.9982 |
| 1.4839 | 3.15 | 750 | 1.4387 | 0.9257 |
| 1.1697 | 4.2 | 1000 | 1.3729 | 0.8792 |
| 0.9353 | 5.25 | 1250 | 1.2608 | 0.8445 |
| 0.7298 | 6.3 | 1500 | 1.1867 | 0.8052 |
| 0.6418 | 7.35 | 1750 | 1.2414 | 0.7997 |
| 0.5698 | 8.4 | 2000 | 1.2240 | 0.7766 |
| 0.5084 | 9.45 | 2250 | 1.1910 | 0.7687 |
| 0.4912 | 10.5 | 2500 | 1.2241 | 0.7617 |
| 0.4144 | 11.55 | 2750 | 1.2412 | 0.7477 |
| 0.4153 | 12.6 | 3000 | 1.2736 | 0.7511 |
| 0.405 | 13.65 | 3250 | 1.2827 | 0.7328 |
| 0.3852 | 14.7 | 3500 | 1.1981 | 0.7331 |
| 0.3829 | 15.75 | 3750 | 1.3035 | 0.7347 |
| 0.3538 | 16.81 | 4000 | 1.3003 | 0.7240 |
| 0.3385 | 17.86 | 4250 | 1.3354 | 0.7304 |
| 0.3108 | 18.91 | 4500 | 1.2983 | 0.7229 |
| 0.3037 | 19.96 | 4750 | 1.3087 | 0.7175 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
| peterhsu/bert-finetuned-squad | peterhsu | 2022-03-26T08:48:45Z | 6 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us"] | question-answering | 2022-03-21T13:26:16Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
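No usage example is included; a minimal sketch with the question-answering pipeline (question and context are illustrative):
```python
# Minimal sketch: extractive question answering with the fine-tuned SQuAD checkpoint.
from transformers import pipeline

qa = pipeline("question-answering", model="peterhsu/bert-finetuned-squad")
result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of bert-base-cased on the SQuAD dataset.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```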
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
| sanchit-gandhi/wav2vec2-2-rnd-regularisation | sanchit-gandhi | 2022-03-26T06:45:45Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "speech-encoder-decoder", "automatic-speech-recognition", "generated_from_trainer", "dataset:librispeech_asr", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-22T10:13:09Z |
---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model was trained from scratch on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6977
- Wer: 0.1231
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 25.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 6.1467 | 1.68 | 1500 | 6.0558 | 1.3243 |
| 5.4388 | 3.36 | 3000 | 5.4711 | 1.5604 |
| 3.3434 | 5.04 | 4500 | 3.4808 | 0.7461 |
| 1.5259 | 6.73 | 6000 | 2.1931 | 0.3430 |
| 1.4285 | 8.41 | 7500 | 1.5883 | 0.2784 |
| 1.0687 | 10.09 | 9000 | 1.2481 | 0.2069 |
| 0.6425 | 11.77 | 10500 | 1.0507 | 0.1758 |
| 0.7147 | 13.45 | 12000 | 0.9397 | 0.1584 |
| 0.5083 | 15.13 | 13500 | 0.8452 | 0.1453 |
| 0.4287 | 16.82 | 15000 | 0.7915 | 0.1388 |
| 0.3499 | 18.5 | 16500 | 0.7477 | 0.1315 |
| 0.3733 | 20.18 | 18000 | 0.7307 | 0.1287 |
| 0.2609 | 21.86 | 19500 | 0.7061 | 0.1263 |
| 0.2602 | 23.54 | 21000 | 0.6977 | 0.1231 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
| lichenda/chime4_fasnet_dprnn_tac | lichenda | 2022-03-26T05:52:41Z | 36 | 0 | espnet | ["espnet", "audio", "audio-to-audio", "dataset:chime4", "arxiv:1804.00015", "license:cc-by-4.0", "region:us"] | audio-to-audio | 2022-03-21T08:18:15Z |
---
tags:
- espnet
- audio
- audio-to-audio
language: noinfo
datasets:
- chime4
license: cc-by-4.0
---
## ESPnet2 ENH model
### `lichenda/chime4_fasnet_dprnn_tac`
This model was trained by LiChenda using the chime4 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 98f5fb2185b98f9c08fd56492b3d3234504561e7
pip install -e .
cd egs2/chime4/enh1
./run.sh --skip_data_prep false --skip_train true --download_model lichenda/chime4_fasnet_dprnn_tac
```
<!-- Generated by ./scripts/utils/show_enh_score.sh -->
# RESULTS
## Environments
- date: `Sat Mar 19 07:17:45 CST 2022`
- python version: `3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0]`
- espnet version: `espnet 0.10.7a1`
- pytorch version: `pytorch 1.8.1`
- Git hash: `648b024d8fb262eb9923c06a698b9c6df5b16e51`
- Commit date: `Wed Mar 16 18:47:21 2022 +0800`
## ..
config: conf/tuning/train_enh_dprnntac_fasnet.yaml
|dataset|STOI|SAR|SDR|SIR|
|---|---|---|---|---|
|enhanced_dt05_simu_isolated_6ch_track|0.95|15.75|15.75|0.00|
|enhanced_et05_simu_isolated_6ch_track|0.94|15.40|15.40|0.00|
## ENH config
<details><summary>expand</summary>
```
config: conf/tuning/train_enh_dprnntac_fasnet.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: chunk
output_dir: exp/enh_train_enh_dprnntac_fasnet_raw
ngpu: 1
seed: 0
num_workers: 4
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 100
patience: 10
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- si_snr
- max
- - valid
- loss
- min
keep_nbest_models: 1
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 8
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/enh_stats_16k/train/speech_mix_shape
- exp/enh_stats_16k/train/speech_ref1_shape
valid_shape_file:
- exp/enh_stats_16k/valid/speech_mix_shape
- exp/enh_stats_16k/valid/speech_ref1_shape
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 80000
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 32000
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/tr05_simu_isolated_6ch_track/wav.scp
- speech_mix
- sound
- - dump/raw/tr05_simu_isolated_6ch_track/spk1.scp
- speech_ref1
- sound
valid_data_path_and_name_and_type:
- - dump/raw/dt05_simu_isolated_6ch_track/wav.scp
- speech_mix
- sound
- - dump/raw/dt05_simu_isolated_6ch_track/spk1.scp
- speech_ref1
- sound
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.001
eps: 1.0e-08
weight_decay: 0
scheduler: steplr
scheduler_conf:
step_size: 2
gamma: 0.98
init: xavier_uniform
model_conf:
stft_consistency: false
loss_type: mask_mse
mask_type: null
criterions:
- name: si_snr
conf:
eps: 1.0e-07
wrapper: fixed_order
wrapper_conf:
weight: 1.0
use_preprocessor: false
encoder: same
encoder_conf: {}
separator: fasnet
separator_conf:
enc_dim: 64
feature_dim: 64
hidden_dim: 128
layer: 6
segment_size: 24
num_spk: 1
win_len: 16
context_len: 16
sr: 16000
fasnet_type: fasnet
dropout: 0.2
decoder: same
decoder_conf: {}
required:
- output_dir
version: 0.10.7a1
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{ESPnet-SE,
author = {Chenda Li and Jing Shi and Wangyou Zhang and Aswin Shanmugam Subramanian and Xuankai Chang and
Naoyuki Kamo and Moto Hira and Tomoki Hayashi and Christoph B{\"{o}}ddeker and Zhuo Chen and Shinji Watanabe},
title = {ESPnet-SE: End-To-End Speech Enhancement and Separation Toolkit Designed for {ASR} Integration},
booktitle = {{IEEE} Spoken Language Technology Workshop, {SLT} 2021, Shenzhen, China, January 19-22, 2021},
pages = {785--792},
publisher = {{IEEE}},
year = {2021},
url = {https://doi.org/10.1109/SLT48900.2021.9383615},
doi = {10.1109/SLT48900.2021.9383615},
timestamp = {Mon, 12 Apr 2021 17:08:59 +0200},
biburl = {https://dblp.org/rec/conf/slt/Li0ZSCKHHBC021.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| calebcsjm/reversed_harrypotter_generation | calebcsjm | 2022-03-26T05:02:52Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-03-25T20:58:10Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: reversed_harrypotter_generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# reversed_harrypotter_generation
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
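Usage is not documented; a hedged sketch with the text-generation pipeline (the prompt is illustrative, and since the repository name suggests training on reversed text, the ordering of the output is worth checking):
```python
# Hedged sketch: sample text from the fine-tuned distilgpt2 checkpoint.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="calebcsjm/reversed_harrypotter_generation",
)
print(generator("The castle gates", max_new_tokens=40, do_sample=True, num_return_sequences=2))
```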
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
| huggingtweets/atarifounders | huggingtweets | 2022-03-26T03:45:11Z | 3 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-03-10T18:31:26Z |
---
language: en
thumbnail: http://www.huggingtweets.com/atarifounders/1648266306699/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1507523916981583875/6n7ng67H_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">koala/claw/soppy</div>
<div style="text-align: center; font-size: 14px;">@atarifounders</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from koala/claw/soppy.
| Data | koala/claw/soppy |
| --- | --- |
| Tweets downloaded | 3239 |
| Retweets | 129 |
| Short tweets | 883 |
| Tweets kept | 2227 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2gsc0jwi/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @atarifounders's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/tl1eu60e) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/tl1eu60e/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/atarifounders')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
| huggingtweets/_stevenshoe-mkobach | huggingtweets | 2022-03-25T22:23:51Z | 3 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-03-25T22:08:01Z |
---
language: en
thumbnail: http://www.huggingtweets.com/_stevenshoe-mkobach/1648247026634/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1374075536595505154/1_1jV_AF_400x400.png')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1505053150478229505/wAa1lc04_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Matthew Kobach & Steven Shoemaker</div>
<div style="text-align: center; font-size: 14px;">@_stevenshoe-mkobach</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Matthew Kobach & Steven Shoemaker.
| Data | Matthew Kobach | Steven Shoemaker |
| --- | --- | --- |
| Tweets downloaded | 3242 | 1319 |
| Retweets | 136 | 56 |
| Short tweets | 443 | 125 |
| Tweets kept | 2663 | 1138 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/48je6le3/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @_stevenshoe-mkobach's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3oih18qf) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3oih18qf/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/_stevenshoe-mkobach')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/huggingpuppy
|
huggingtweets
| 2022-03-25T18:42:54Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-25T18:41:40Z |
---
language: en
thumbnail: http://www.huggingtweets.com/huggingpuppy/1648233768787/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1504530325526900756/QOTZak3q_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">hug. (INGROUP INTERN)</div>
<div style="text-align: center; font-size: 14px;">@huggingpuppy</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from hug. (INGROUP INTERN).
| Data | hug. (INGROUP INTERN) |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 97 |
| Short tweets | 816 |
| Tweets kept | 2336 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1wq0kiqq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @huggingpuppy's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3aonv9kh) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3aonv9kh/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/huggingpuppy')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
scasutt/wav2vec2-base_toy_train_data_augment_0.1
|
scasutt
| 2022-03-25T17:44:40Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-25T14:40:37Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base_toy_train_data_augment_0.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base_toy_train_data_augment_0.1
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3786
- Wer: 0.9954
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.1342 | 1.05 | 250 | 3.3901 | 0.9954 |
| 3.0878 | 2.1 | 500 | 3.4886 | 0.9954 |
| 3.0755 | 3.15 | 750 | 3.4616 | 0.9954 |
| 3.0891 | 4.2 | 1000 | 3.5316 | 0.9954 |
| 3.0724 | 5.25 | 1250 | 3.2608 | 0.9954 |
| 3.0443 | 6.3 | 1500 | 3.3881 | 0.9954 |
| 3.0421 | 7.35 | 1750 | 3.4507 | 0.9954 |
| 3.0448 | 8.4 | 2000 | 3.4525 | 0.9954 |
| 3.0455 | 9.45 | 2250 | 3.3342 | 0.9954 |
| 3.0425 | 10.5 | 2500 | 3.3385 | 0.9954 |
| 3.0457 | 11.55 | 2750 | 3.4411 | 0.9954 |
| 3.0375 | 12.6 | 3000 | 3.4459 | 0.9954 |
| 3.0459 | 13.65 | 3250 | 3.3883 | 0.9954 |
| 3.0455 | 14.7 | 3500 | 3.3417 | 0.9954 |
| 3.0524 | 15.75 | 3750 | 3.3908 | 0.9954 |
| 3.0443 | 16.81 | 4000 | 3.3932 | 0.9954 |
| 3.0446 | 17.86 | 4250 | 3.4052 | 0.9954 |
| 3.0412 | 18.91 | 4500 | 3.3776 | 0.9954 |
| 3.0358 | 19.96 | 4750 | 3.3786 | 0.9954 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
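For quick inference, a minimal sketch using the `transformers` ASR pipeline is shown below; it assumes the repository ships a processor/tokenizer alongside the weights and that the input audio is sampled at 16 kHz (the file path is a placeholder).
```python
from transformers import pipeline

# Minimal inference sketch; "sample.wav" is a placeholder for a 16 kHz mono audio file
asr = pipeline("automatic-speech-recognition", model="scasutt/wav2vec2-base_toy_train_data_augment_0.1")
print(asr("sample.wav"))
```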
|
manandey/wav2vec2-large-xlsr-tamil
|
manandey
| 2022-03-25T16:52:49Z | 22 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"hf-asr-leaderboard",
"ta",
"dataset:common_voice",
"doi:10.57967/hf/0191",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: ta
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
- hf-asr-leaderboard
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Tamil by Manan Dey
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ta
type: common_voice
args: ta
metrics:
- name: Test WER
type: wer
value: 56.44
---
# Wav2Vec2-Large-XLSR-53-Tamil
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Tamil using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ta", split="test[:2%]").
processor = Wav2Vec2Processor.from_pretrained("manandey/wav2vec2-large-xlsr-tamil")
model = Wav2Vec2ForCTC.from_pretrained("manandey/wav2vec2-large-xlsr-tamil")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Tamil test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "ta", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("manandey/wav2vec2-large-xlsr-tamil")
model = Wav2Vec2ForCTC.from_pretrained("manandey/wav2vec2-large-xlsr-tamil")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\’\–\(\)]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 56.44%
## Training
The Common Voice `train` and `validation` splits were used for training.
|
Wende/bert-finetuned-ner
|
Wende
| 2022-03-25T16:19:13Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-25T15:21:55Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9321670242614293
- name: Recall
type: recall
value: 0.9505217098619994
- name: F1
type: f1
value: 0.9412548954253812
- name: Accuracy
type: accuracy
value: 0.9860334373344322
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0575
- Precision: 0.9322
- Recall: 0.9505
- F1: 0.9413
- Accuracy: 0.9860
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2219 | 1.0 | 878 | 0.0716 | 0.9076 | 0.9288 | 0.9181 | 0.9808 |
| 0.0453 | 2.0 | 1756 | 0.0597 | 0.9297 | 0.9477 | 0.9386 | 0.9852 |
| 0.0239 | 3.0 | 2634 | 0.0575 | 0.9322 | 0.9505 | 0.9413 | 0.9860 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.8.2+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
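A minimal usage sketch with the token-classification pipeline; the example sentence is arbitrary, and the entity labels come from the conll2003 label set stored in the model config.
```python
from transformers import pipeline

# Group word pieces into whole entity spans
ner = pipeline("token-classification", model="Wende/bert-finetuned-ner", aggregation_strategy="simple")
print(ner("Hugging Face is based in New York City."))
```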
|
ianMconversica/autotrain-parrot_finetune_v1-667919695
|
ianMconversica
| 2022-03-25T15:41:11Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain",
"unk",
"dataset:McIan91/autotrain-data-parrot_finetune_v1",
"co2_eq_emissions",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-25T12:27:52Z |
---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- McIan91/autotrain-data-parrot_finetune_v1
co2_eq_emissions: 207.64739623144084
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 667919695
- CO2 Emissions (in grams): 207.64739623144084
## Validation Metrics
- Loss: 0.06461456418037415
- Rouge1: 70.5184
- Rouge2: 66.9204
- RougeL: 70.4464
- RougeLsum: 70.4705
- Gen Len: 18.5385
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/McIan91/autotrain-parrot_finetune_v1-667919695
```
|
Rocketknight1/temp-colab-upload-test4
|
Rocketknight1
| 2022-03-25T15:06:46Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-25T15:06:07Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Rocketknight1/temp-colab-upload-test4
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Rocketknight1/temp-colab-upload-test4
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0000
- Validation Loss: 0.0000
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 0.001, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.0000 | 0.0000 | 0 |
| 0.0000 | 0.0000 | 1 |
### Framework versions
- Transformers 4.18.0.dev0
- TensorFlow 2.8.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
huggingtweets/rivatez
|
huggingtweets
| 2022-03-25T14:57:29Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-25T14:51:51Z |
---
language: en
thumbnail: http://www.huggingtweets.com/rivatez/1648220244511/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1421403684085374979/SoqYa6o3_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Riva</div>
<div style="text-align: center; font-size: 14px;">@rivatez</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Riva.
| Data | Riva |
| --- | --- |
| Tweets downloaded | 3178 |
| Retweets | 780 |
| Short tweets | 405 |
| Tweets kept | 1993 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2qe0i10s/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @rivatez's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2rspxzzv) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2rspxzzv/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/rivatez')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
azizbarank/mbert-finnic-ner
|
azizbarank
| 2022-03-25T13:55:16Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-25T12:43:46Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: mbert-finnic-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbert-finnic-ner
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the Finnish and Estonian parts of the "WikiANN" dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1427
- Precision: 0.9090
- Recall: 0.9156
- F1: 0.9123
- Accuracy: 0.9672
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1636 | 1.0 | 2188 | 0.1385 | 0.8906 | 0.9000 | 0.8953 | 0.9601 |
| 0.0991 | 2.0 | 4376 | 0.1346 | 0.9099 | 0.9095 | 0.9097 | 0.9660 |
| 0.0596 | 3.0 | 6564 | 0.1427 | 0.9090 | 0.9156 | 0.9123 | 0.9672 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
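The hyperparameters listed above map onto `TrainingArguments` roughly as follows; this is a sketch for orientation, not the original training script, and `output_dir` is arbitrary.
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="mbert-finnic-ner",      # arbitrary placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```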
|
ssardorf/pegasus-xsum-new-dataset
|
ssardorf
| 2022-03-25T13:12:00Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-25T13:07:00Z |
---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: pegasus-xsum-new-dataset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-xsum-new-dataset
This model is a fine-tuned version of [google/pegasus-xsum](https://huggingface.co/google/pegasus-xsum) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8355
- Rouge1: 48.7306
- Rouge2: 34.1291
- Rougel: 44.0778
- Rougelsum: 45.7139
- Gen Len: 30.8889
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.2+cpu
- Datasets 1.18.3
- Tokenizers 0.11.6
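A minimal usage sketch with the summarization pipeline; the input text and generation length are placeholders.
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="ssardorf/pegasus-xsum-new-dataset")
print(summarizer("Replace this with the document you want to summarize.", max_length=60))
```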
|
vumichien/mobilebert-uncased-squad-v2
|
vumichien
| 2022-03-25T13:09:07Z | 72 | 0 |
transformers
|
[
"transformers",
"tf",
"mobilebert",
"question-answering",
"generated_from_keras_callback",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-25T13:07:29Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: tf-mobilebert-uncased-squad-v2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# tf-mobilebert-uncased-squad-v2
This model is a fine-tuned version of [csarron/mobilebert-uncased-squad-v2](https://huggingface.co/csarron/mobilebert-uncased-squad-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.8.0
- Tokenizers 0.11.6
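A minimal usage sketch for extractive question answering; `framework="tf"` loads the TensorFlow weights exported here, and the question/context pair is an arbitrary example.
```python
from transformers import pipeline

qa = pipeline("question-answering", model="vumichien/mobilebert-uncased-squad-v2", framework="tf")
print(qa(question="Where does Wolfgang live?", context="My name is Wolfgang and I live in Berlin."))
```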
|
vumichien/emo-mobilebert
|
vumichien
| 2022-03-25T13:01:20Z | 65 | 0 |
transformers
|
[
"transformers",
"tf",
"mobilebert",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-25T12:59:51Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: tf-emo-mobilebert
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# tf-emo-mobilebert
This model is a fine-tuned version of [lordtt13/emo-mobilebert](https://huggingface.co/lordtt13/emo-mobilebert) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.8.0
- Tokenizers 0.11.6
|
vumichien/albert-base-v2-squad2
|
vumichien
| 2022-03-25T12:48:59Z | 76 | 0 |
transformers
|
[
"transformers",
"tf",
"albert",
"question-answering",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-25T12:48:04Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: tf-albert-base-v2-squad2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# tf-albert-base-v2-squad2
This model is a fine-tuned version of [twmkn9/albert-base-v2-squad2](https://huggingface.co/twmkn9/albert-base-v2-squad2) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.8.0
- Tokenizers 0.11.6
|
vumichien/tf-albert-base-v2
|
vumichien
| 2022-03-25T12:42:56Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"albert",
"token-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-25T12:42:00Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: tf-albert-base-v2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# tf-albert-base-v2
This model is a fine-tuned version of [vumichien/albert-base-v2](https://huggingface.co/vumichien/albert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.8.0
- Tokenizers 0.11.6
|
bigmorning/try-m-e
|
bigmorning
| 2022-03-25T12:37:22Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-25T06:42:40Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: try-m-e
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# try-m-e
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.8.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
scasutt/wav2vec2-large-xlsr-53_toy_train_data_augment_0.1.csv
|
scasutt
| 2022-03-25T12:18:08Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-25T11:45:23Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xlsr-53_toy_train_data_augment_0.1.csv
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53_toy_train_data_augment_0.1.csv
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4695
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 3.2456 | 0.84 | 200 | 3.6215 | 1.0 |
| 3.0637 | 1.68 | 400 | 3.3918 | 1.0 |
| 3.046 | 2.52 | 600 | 3.4168 | 1.0 |
| 3.0627 | 3.36 | 800 | 3.4695 | 1.0 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
|
microsoft/wavlm-base-sd
|
microsoft
| 2022-03-25T12:05:11Z | 76 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wavlm",
"audio-frame-classification",
"speech",
"en",
"arxiv:2110.13900",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language:
- en
tags:
- speech
---
# WavLM-Base for Speaker Diarization
[Microsoft's WavLM](https://github.com/microsoft/unilm/tree/master/wavlm)
The model was pretrained on 16kHz sampled speech audio with utterance and speaker contrastive loss. When using the model, make sure that your speech input is also sampled at 16kHz.
The model was pre-trained on 960h of [Librispeech](https://huggingface.co/datasets/librispeech_asr).
[Paper: WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900)
Authors: Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei
**Abstract**
*Self-supervised learning (SSL) achieves great success in speech recognition, while limited exploration has been attempted for other speech processing tasks. As speech signal contains multi-faceted information including speaker identity, paralinguistics, spoken content, etc., learning universal representations for all speech tasks is challenging. In this paper, we propose a new pre-trained model, WavLM, to solve full-stack downstream speech tasks. WavLM is built based on the HuBERT framework, with an emphasis on both spoken content modeling and speaker identity preservation. We first equip the Transformer structure with gated relative position bias to improve its capability on recognition tasks. For better speaker discrimination, we propose an utterance mixing training strategy, where additional overlapped utterances are created unsupervisely and incorporated during model training. Lastly, we scale up the training dataset from 60k hours to 94k hours. WavLM Large achieves state-of-the-art performance on the SUPERB benchmark, and brings significant improvements for various speech processing tasks on their representative benchmarks.*
The original model can be found under https://github.com/microsoft/unilm/tree/master/wavlm.
# Fine-tuning details
The model is fine-tuned on the [LibriMix dataset](https://github.com/JorisCos/LibriMix) using just a linear layer for mapping the network outputs.
# Usage
## Speaker Diarization
```python
from transformers import Wav2Vec2FeatureExtractor, WavLMForAudioFrameClassification
from datasets import load_dataset
import torch
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained('microsoft/wavlm-base-sd')
model = WavLMForAudioFrameClassification.from_pretrained('microsoft/wavlm-base-sd')
# audio file is decoded on the fly
inputs = feature_extractor(dataset[0]["audio"]["array"], return_tensors="pt")
logits = model(**inputs).logits
probabilities = torch.sigmoid(logits[0])
# labels is a one-hot array of shape (num_frames, num_speakers)
labels = (probabilities > 0.5).long()
```
# License
The official license can be found [here](https://github.com/microsoft/UniSpeech/blob/main/LICENSE)

|
Jezia/pytorch-pretrained-BigGAN
|
Jezia
| 2022-03-25T10:53:53Z | 0 | 2 |
pytorch
|
[
"pytorch",
"biggan",
"dataset:ImageNet",
"license:apache-2.0",
"region:us"
] | null | 2022-03-25T10:05:00Z |
---
license: apache-2.0
library_name: pytorch
tags:
- biggan
datasets:
- ImageNet
---
## Model description
This is an op-for-op PyTorch reimplementation of DeepMind's BigGAN model with the pre-trained weights from DeepMind [biggan-deep-128](https://tfhub.dev/deepmind/biggan-deep-128/1).
## Training and evaluation data
The model is trained on the [ImageNet dataset](https://tfhub.dev/s?dataset=imagenet-ilsvrc-2012-cls), which consists of 1000 classes. All images are resized to 64 * 64 for the sake of convenience. The model takes noise as input, and Conv2DTranspose layers are used for upsampling. The output resolution is 128, 256, or 512 pixels depending on the model variant.
## How to use this model
You can use this model to generate new images.
```
import torch
from pytorch_pretrained_biggan import (BigGAN, one_hot_from_names, truncated_noise_sample,
save_as_images, display_in_terminal)
model = BigGAN.from_pretrained('biggan-deep-256')
```
You can generate examples using a noise vector.
```
with torch.no_grad():
output = model(noise_vector, class_vector, truncation)
```
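The `noise_vector`, `class_vector`, and `truncation` inputs above are not defined in the snippet; here is a minimal sketch of building them with the helper functions bundled in `pytorch_pretrained_biggan` (the class name `'soap bubble'` is an arbitrary example).
```
import torch
from pytorch_pretrained_biggan import one_hot_from_names, truncated_noise_sample

truncation = 0.4
class_vector = torch.from_numpy(one_hot_from_names(['soap bubble'], batch_size=1))
noise_vector = torch.from_numpy(truncated_noise_sample(truncation=truncation, batch_size=1))
```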
## Intended use and biases
This model is not intended for production.
### Generated images

### Credits
@thomwolf
Thomas Wolf
@vfdev-5
vfdev
|
patrickvonplaten/deberta_v3_amazon_reviews
|
patrickvonplaten
| 2022-03-25T10:46:41Z | 15 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-20T18:23:47Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: deberta_v3_amazon_reviews
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta_v3_amazon_reviews
This model is a fine-tuned version of [patrickvonplaten/deberta_v3_amazon_reviews](https://huggingface.co/patrickvonplaten/deberta_v3_amazon_reviews) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 2
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
eliasws/openApiT5-distilled-description-v3
|
eliasws
| 2022-03-25T09:30:37Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"t5",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-25T09:25:50Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# eliasws/openApiT5-distilled-description-v3
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('eliasws/openApiT5-distilled-description-v3')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('eliasws/openApiT5-distilled-description-v3')
model = AutoModel.from_pretrained('eliasws/openApiT5-distilled-description-v3')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=eliasws/openApiT5-distilled-description-v3)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 5547 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MSELoss.MSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1109,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': None, 'do_lower_case': False}) with Transformer model: T5EncoderModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
RomanEnikeev/distilbert-base-uncased-finetuned-cola
|
RomanEnikeev
| 2022-03-25T09:13:46Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-25T06:47:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5670814703238499
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8265
- Matthews Correlation: 0.5671
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5216 | 1.0 | 535 | 0.5536 | 0.4041 |
| 0.3481 | 2.0 | 1070 | 0.5242 | 0.5206 |
| 0.2372 | 3.0 | 1605 | 0.6162 | 0.5311 |
| 0.1701 | 4.0 | 2140 | 0.7704 | 0.5461 |
| 0.1304 | 5.0 | 2675 | 0.8265 | 0.5671 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
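A minimal usage sketch with the text-classification pipeline; the example sentence is arbitrary, and unless the config maps them, the returned labels are the generic `LABEL_0`/`LABEL_1` corresponding to CoLA's unacceptable/acceptable classes.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="RomanEnikeev/distilbert-base-uncased-finetuned-cola")
print(classifier("The book was read by him quickly."))
```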
|
Mr-Wick/Albert
|
Mr-Wick
| 2022-03-25T07:59:38Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"albert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-24T19:29:49Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Mr-Wick/Albert
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Mr-Wick/Albert
This model is a fine-tuned version of [Mr-Wick/Albert](https://huggingface.co/Mr-Wick/Albert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4248
- Train End Logits Accuracy: 0.3423
- Train Loss Accuracy: 0.0664
- Train Start Logits Accuracy: 0.3437
- Validation Loss: 0.9468
- Validation End Logits Accuracy: 0.4724
- Validation Loss Accuracy: 0.0591
- Validation Start Logits Accuracy: 0.4772
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 16494, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Loss Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Loss Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:-------------------:|:---------------------------:|:---------------:|:------------------------------:|:------------------------:|:--------------------------------:|:-----:|
| 0.6581 | 0.3488 | 0.0671 | 0.3529 | 0.9366 | 0.4415 | 0.0657 | 0.4486 | 0 |
| 0.4248 | 0.3423 | 0.0664 | 0.3437 | 0.9468 | 0.4724 | 0.0591 | 0.4772 | 1 |
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.8.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
csukuangfj/transducer-loss-benchmarking
|
csukuangfj
| 2022-03-25T07:46:33Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-03-02T23:29:05Z |
# Introduction
This repo contains the benchmark results for <https://github.com/csukuangfj/transducer-loss-benchmarking>
## Usage
First, install `git-lfs`.
Second, use the following command to clone this repo:
```bash
git lfs install
git clone https://huggingface.co/csukuangfj/transducer-loss-benchmarking
```
**Caution**: You have to run `git lfs install` first. Otherwise, you will be **SAD** later.
Third,
```
pip install torch-tb-profiler
cd transducer-loss-benchmarking
tensorboard --logdir ./log/torchaudio-30 --port 6006
tensorboard --logdir ./log/optimized_transducer-30 --port 6007
```
Fourth, open your browser and go to
- <http://localhost:6006/#pytorch_profiler>
- <http://localhost:6007/#pytorch_profiler>
You will see the following images:


|
Ebtihal/AraBertMo_base_V9
|
Ebtihal
| 2022-03-25T07:25:05Z | 18 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
Arabic Model AraBertMo_base_V9
---
language: ar
tags: Fill-Mask
datasets: OSCAR
widget:
- text: " السلام عليكم ورحمة[MASK] وبركاتة"
- text: " اهلا وسهلا بكم في [MASK] من سيربح المليون"
- text: " مرحبا بك عزيزي الزائر [MASK] موقعنا "
---
# Arabic BERT Model
**AraBERTMo** is an Arabic pre-trained language model based on [Google's BERT architecture](https://github.com/google-research/bert).
AraBERTMo_base uses the same BERT-Base config.
AraBERTMo_base now comes in 10 new variants.
All models are available on the `HuggingFace` model page under the [Ebtihal](https://huggingface.co/Ebtihal/) name.
Checkpoints are available in PyTorch formats.
## Pretraining Corpus
The `AraBertMo_base_V9` model was pre-trained on ~3 million words:
- [OSCAR](https://traces1.inria.fr/oscar/) - Arabic version "unshuffled_deduplicated_ar".
## Training results
this model achieves the following results:
| Task | Num examples | Num Epochs | Batch Size | steps | Wall time | training loss|
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|
| Fill-Mask| 30024| 9 | 64 | 4230 | 7h 57m 42s | 7.3264 |
## Load Pretrained Model
You can use this model by installing `torch` or `tensorflow` and the Hugging Face `transformers` library, and then initializing it like this:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("Ebtihal/AraBertMo_base_V9")
model = AutoModelForMaskedLM.from_pretrained("Ebtihal/AraBertMo_base_V9")
```
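A quick fill-mask check using one of the widget examples above (a usage sketch, not part of the original evaluation):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Ebtihal/AraBertMo_base_V9")
print(fill_mask(" السلام عليكم ورحمة[MASK] وبركاتة"))
```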
## This model was built for master's degree research at:
- [University of kufa](https://uokufa.edu.iq/).
- [Faculty of Computer Science and Mathematics](https://mathcomp.uokufa.edu.iq/).
- **Department of Computer Science**
|
clisi2000/distilbert-base-uncased-finetuned-clinc
|
clisi2000
| 2022-03-25T06:23:40Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-22T05:03:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9158064516129032
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7796
- Accuracy: 0.9158
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2883 | 1.0 | 318 | 3.2778 | 0.7390 |
| 2.6185 | 2.0 | 636 | 1.8740 | 0.8232 |
| 1.5423 | 3.0 | 954 | 1.1579 | 0.8890 |
| 1.0131 | 4.0 | 1272 | 0.8629 | 0.9077 |
| 0.7964 | 5.0 | 1590 | 0.7796 | 0.9158 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.2+cpu
- Datasets 1.18.4
- Tokenizers 0.10.3
|
bigmorning/try-m
|
bigmorning
| 2022-03-25T04:01:22Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-24T16:02:27Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: try-m
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# try-m
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.8.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
sanchit-gandhi/wav2vec2-2-rnd-no-adapter-regularisation
|
sanchit-gandhi
| 2022-03-25T03:10:23Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:librispeech_asr",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-22T10:13:48Z |
---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model was trained from scratch on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7177
- Wer: 0.1283
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 25.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 6.1228 | 1.68 | 1500 | 6.0490 | 1.1433 |
| 5.4173 | 3.36 | 3000 | 5.3453 | 1.4878 |
| 4.1635 | 5.04 | 4500 | 4.4185 | 0.9644 |
| 2.1246 | 6.73 | 6000 | 3.2089 | 0.5026 |
| 1.88 | 8.41 | 7500 | 1.9886 | 0.3438 |
| 1.2606 | 10.09 | 9000 | 1.4472 | 0.2487 |
| 0.7492 | 11.77 | 10500 | 1.1716 | 0.1949 |
| 0.8868 | 13.45 | 12000 | 1.0146 | 0.1702 |
| 0.5078 | 15.13 | 13500 | 0.8821 | 0.1548 |
| 0.4515 | 16.82 | 15000 | 0.8181 | 0.1417 |
| 0.3902 | 18.5 | 16500 | 0.7765 | 0.1364 |
| 0.3575 | 20.18 | 18000 | 0.7367 | 0.1333 |
| 0.2903 | 21.86 | 19500 | 0.7211 | 0.1301 |
| 0.2698 | 23.54 | 21000 | 0.7177 | 0.1283 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Aureliano/distilbert-base-uncased-if
|
Aureliano
| 2022-03-25T00:06:03Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"distilbert",
"text-classification",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-21T17:46:18Z |
---
language: en
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# DistilBERT base model (uncased) for Interactive Fiction
[`distilbert-base-uncased`](https://huggingface.co/distilbert-base-uncased) finetuned on a dataset of Interactive
Fiction commands.
Details on the datasets can be found [here](https://github.com/aporporato/jericho-corpora).
The resulting model scored an accuracy of 0.976253 on the WordNet task test set.
## How to use the discriminator in `transformers`
```python
import tensorflow as tf
from transformers import TFAutoModelForSequenceClassification, AutoTokenizer
discriminator = TFAutoModelForSequenceClassification.from_pretrained("Aureliano/distilbert-base-uncased-if")
tokenizer = AutoTokenizer.from_pretrained("Aureliano/distilbert-base-uncased-if")
text = "get lamp"
encoded_input = tokenizer(text, return_tensors='tf')
output = discriminator(encoded_input)
prediction = tf.nn.softmax(output["logits"][0], -1)
label = discriminator.config.id2label[tf.math.argmax(prediction).numpy()]
print(text, ":", label) # take.v.04 -> "get into one's hands, take physically"
```
## How to use the discriminator in `transformers` on a custom dataset
(Heavily based on: https://github.com/huggingface/notebooks/blob/master/examples/text_classification-tf.ipynb)
```python
import math
import numpy as np
import tensorflow as tf
from datasets import load_metric, Dataset, DatasetDict
from transformers import TFAutoModel, TFAutoModelForSequenceClassification, AutoTokenizer, DataCollatorWithPadding, create_optimizer
from transformers.keras_callbacks import KerasMetricCallback
# This example shows how this model can be used:
# You should fine-tune the model on your own corpus of commands, which should be bigger than this toy example
dict_train = {
"idx": ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "17", "18",
"19", "20"],
"sentence": ["e", "get pen", "drop book", "x paper", "i", "south", "get paper", "drop the pen", "x book",
"inventory", "n", "get the book", "drop paper", "look at Pen", "inv", "g", "s", "get sandwich",
"drop sandwich", "x sandwich", "agin"],
"label": ["travel.v.01", "take.v.04", "drop.v.01", "examine.v.02", "inventory.v.01", "travel.v.01", "take.v.04",
"drop.v.01", "examine.v.02", "inventory.v.01", "travel.v.01", "take.v.04", "drop.v.01", "examine.v.02",
"inventory.v.01", "repeat.v.01", "travel.v.01", "take.v.04", "drop.v.01", "examine.v.02", "repeat.v.01"]
}
dict_val = {
"idx": ["0", "1", "2", "3", "4", "5"],
"sentence": ["w", "get shield", "drop sword", "x spikes", "i", "repeat"],
"label": ["travel.v.01", "take.v.04", "drop.v.01", "examine.v.02", "inventory.v.01", "repeat.v.01"]
}
raw_train_dataset = Dataset.from_dict(dict_train)
raw_val_dataset = Dataset.from_dict(dict_val)
raw_dataset = DatasetDict()
raw_dataset["train"] = raw_train_dataset
raw_dataset["val"] = raw_val_dataset
raw_dataset = raw_dataset.class_encode_column("label")
print(raw_dataset)
print(raw_dataset["train"].features)
print(raw_dataset["val"].features)
print(raw_dataset["train"][1])
label2id = {}
id2label = {}
for i, l in enumerate(raw_dataset["train"].features["label"].names):
label2id[l] = i
id2label[i] = l
discriminator = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased",
label2id=label2id,
id2label=id2label)
discriminator.distilbert = TFAutoModel.from_pretrained("Aureliano/distilbert-base-uncased-if")
tokenizer = AutoTokenizer.from_pretrained("Aureliano/distilbert-base-uncased-if")
tokenize_function = lambda example: tokenizer(example["sentence"], truncation=True)
pre_tokenizer_columns = set(raw_dataset["train"].features)
encoded_dataset = raw_dataset.map(tokenize_function, batched=True)
tokenizer_columns = list(set(encoded_dataset["train"].features) - pre_tokenizer_columns)
data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf")
batch_size = len(encoded_dataset["train"])
tf_train_dataset = encoded_dataset["train"].to_tf_dataset(
columns=tokenizer_columns,
label_cols=["labels"],
shuffle=True,
batch_size=batch_size,
collate_fn=data_collator
)
tf_validation_dataset = encoded_dataset["val"].to_tf_dataset(
columns=tokenizer_columns,
label_cols=["labels"],
shuffle=False,
batch_size=batch_size,
collate_fn=data_collator
)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
num_epochs = 20
batches_per_epoch = math.ceil(len(encoded_dataset["train"]) / batch_size)
total_train_steps = int(batches_per_epoch * num_epochs)
optimizer, schedule = create_optimizer(
init_lr=2e-5, num_warmup_steps=total_train_steps // 5, num_train_steps=total_train_steps
)
metric = load_metric("accuracy")
def compute_metrics(eval_predictions):
logits, labels = eval_predictions
predictions = np.argmax(logits, axis=-1)
return metric.compute(predictions=predictions, references=labels)
metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_dataset)
callbacks = [metric_callback]
discriminator.compile(optimizer=optimizer, loss=loss, metrics=["sparse_categorical_accuracy"])
discriminator.fit(
tf_train_dataset,
epochs=num_epochs,
validation_data=tf_validation_dataset,
callbacks=callbacks
)
print("Evaluate on test data")
results = discriminator.evaluate(tf_validation_dataset)
print("test loss, test acc:", results)
text = "i"
encoded_input = tokenizer(text, return_tensors='tf')
output = discriminator(encoded_input)
prediction = tf.nn.softmax(output["logits"][0], -1)
label = id2label[tf.math.argmax(prediction).numpy()]
print("\n", text, ":", label,
"\n") # ideally 'inventory.v.01' (-> "make or include in an itemized record or report"), but probably only with a better finetuning dataset
text = "get lamp"
encoded_input = tokenizer(text, return_tensors='tf')
output = discriminator(encoded_input)
prediction = tf.nn.softmax(output["logits"][0], -1)
label = id2label[tf.math.argmax(prediction).numpy()]
print("\n", text, ":", label,
"\n") # ideally 'take.v.04' (-> "get into one's hands, take physically"), but probably only with a better finetuning dataset
text = "w"
encoded_input = tokenizer(text, return_tensors='tf')
output = discriminator(encoded_input)
prediction = tf.nn.softmax(output["logits"][0], -1)
label = id2label[tf.math.argmax(prediction).numpy()]
print("\n", text, ":", label,
"\n") # ideally 'travel.v.01' (-> "change location; move, travel, or proceed, also metaphorically"), but probably only with a better finetuning dataset
```
## How to use in a Rasa pipeline
The model can be integrated in a Rasa pipeline through
a [`LanguageModelFeaturizer`](https://rasa.com/docs/rasa/components#languagemodelfeaturizer)
```yaml
recipe: default.v1
language: en
pipeline:
# See https://rasa.com/docs/rasa/tuning-your-model for more information.
...
- name: "WhitespaceTokenizer"
...
- name: LanguageModelFeaturizer
model_name: "distilbert"
model_weights: "Aureliano/distilbert-base-uncased-if"
...
```
|
tbosse/distilbert-base-uncased-finetuned-pos
|
tbosse
| 2022-03-25T00:02:11Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-20T21:29:33Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-pos
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9109037731744458
- name: Recall
type: recall
value: 0.9143515710299648
- name: F1
type: f1
value: 0.9126244157605404
- name: Accuracy
type: accuracy
value: 0.9245555785025498
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-pos
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3165
- Precision: 0.9109
- Recall: 0.9144
- F1: 0.9126
- Accuracy: 0.9246
## Model description
More information needed
## Intended uses & limitations
More information needed
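A minimal usage sketch, assuming the standard `token-classification` pipeline applies to this checkpoint (the example sentence is made up):
```python
from transformers import pipeline

# Minimal sketch: load the fine-tuned checkpoint with the generic token-classification pipeline.
# The exact label names depend on the fine-tuning run reported above.
tagger = pipeline("token-classification", model="tbosse/distilbert-base-uncased-finetuned-pos")

print(tagger("Hugging Face is based in New York City."))
```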
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.7941 | 1.0 | 878 | 0.3504 | 0.8995 | 0.9026 | 0.9011 | 0.9176 |
| 0.2533 | 2.0 | 1756 | 0.3216 | 0.9091 | 0.9104 | 0.9098 | 0.9233 |
| 0.2047 | 3.0 | 2634 | 0.3165 | 0.9109 | 0.9144 | 0.9126 | 0.9246 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
voidful/wav2vec2-large-xlsr-53-tw-gpt
|
voidful
| 2022-03-24T23:08:57Z | 37 | 3 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"hf-asr-leaderboard",
"robust-speech-event",
"speech",
"xlsr-fine-tuning-week",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: zh-TW
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
- robust-speech-event
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Taiwanese Mandarin(zh-tw) by Voidful
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice zh-TW
type: common_voice
args: zh-TW
metrics:
- name: Test CER
type: cer
value: 18.36
---
# Wav2Vec2-Large-XLSR-53-tw-gpt
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on zh-tw using the [Common Voice](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
[Colab trial](https://colab.research.google.com/drive/1e_z5jQHYbO2YKEaUgzb1ww1WwiAyydAj?usp=sharing)
```python
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
AutoTokenizer,
AutoModelWithLMHead
)
import torch
import re
import sys
model_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt"
device = "cuda"
processor_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt"
chars_to_ignore_regex = r"[¥•"#$%&'()*+,-/:;<=>@[\]^_`{|}~⦅⦆「」、 、〃〈〉《》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏﹑﹔·'℃°•·.﹑︰〈〉─《﹖﹣﹂﹁﹔!?。。"#$%&'()*+,﹐-/:;<=>@[\]^_`{|}~⦅⦆「」、、〃》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏..!\"#$%&()*+,\-.\:;<=>?@\[\]\\\/^_`{|}~]"
model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
processor = Wav2Vec2Processor.from_pretrained(processor_name)
tokenizer = AutoTokenizer.from_pretrained("ckiplab/gpt2-base-chinese")
gpt_model = AutoModelWithLMHead.from_pretrained("ckiplab/gpt2-base-chinese").to(device)
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)
def load_file_to_data(file):
batch = {}
speech, _ = torchaudio.load(file)
batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
batch["sampling_rate"] = resampler.new_freq
return batch
def predict(data):
features = processor(data["speech"], sampling_rate=data["sampling_rate"], padding=True, return_tensors="pt")
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
decoded_results = []
for logit in logits:
pred_ids = torch.argmax(logit, dim=-1)
mask = pred_ids.ge(1).unsqueeze(-1).expand(logit.size())
vocab_size = logit.size()[-1]
voice_prob = torch.nn.functional.softmax((torch.masked_select(logit, mask).view(-1,vocab_size)),dim=-1)
gpt_input = torch.cat((torch.tensor([tokenizer.cls_token_id]).to(device),pred_ids[pred_ids>0]), 0)
gpt_prob = torch.nn.functional.softmax(gpt_model(gpt_input).logits, dim=-1)[:voice_prob.size()[0],:]
comb_pred_ids = torch.argmax(gpt_prob*voice_prob, dim=-1)
decoded_results.append(processor.decode(comb_pred_ids))
return decoded_results
```
Predict
```python
predict(load_file_to_data('voice file path'))
```
## Evaluation
The model can be evaluated as follows on the zh-tw test data of Common Voice.
For the CER calculation, refer to https://huggingface.co/ctl/wav2vec2-large-xlsr-cantonese
env setup:
```
!pip install editdistance
!pip install torchaudio
!pip install datasets transformers
```
## Evaluation without LM:
```python
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
import torch
import re
import sys
import editdistance as ed  # needed by cer_cal below
from transformers import AutoTokenizer, AutoModelWithLMHead
from datasets import Audio
from math import log
model_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt"
device = "cuda"
processor_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt"
chars_to_ignore_regex = r"[¥•"#$%&'()*+,-/:;<=>@[\]^_`{|}~⦅⦆「」、 、〃〈〉《》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏﹑﹔·'℃°•·.﹑︰〈〉─《﹖﹣﹂﹁﹔!?。。"#$%&'()*+,﹐-/:;<=>@[\]^_`{|}~⦅⦆「」、、〃》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏..!\"#$%&()*+,\-.\:;<=>?@\[\]\\\/^_`{|}~]"
tokenizer = AutoTokenizer.from_pretrained("ckiplab/gpt2-base-chinese")
lm_model = AutoModelWithLMHead.from_pretrained("ckiplab/gpt2-base-chinese").to(device)
model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
processor = Wav2Vec2Processor.from_pretrained(processor_name)
ds = load_dataset("common_voice", 'zh-TW', split="test")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
def map_to_array(batch):
audio = batch["audio"]
batch["speech"] = processor(audio["array"], sampling_rate=audio["sampling_rate"]).input_values[0]
batch["sampling_rate"] = audio["sampling_rate"]
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
return batch
ds = ds.map(map_to_array)
def map_to_pred(batch):
features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt")
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = processor.batch_decode(pred_ids)
batch["target"] = batch["sentence"]
return batch
result = ds.map(map_to_pred, batched=True, batch_size=3, remove_columns=list(ds.features.keys()))
def cer_cal(groundtruth, hypothesis):
err = 0
tot = 0
for p, t in zip(hypothesis, groundtruth):
err += float(ed.eval(p.lower(), t.lower()))
tot += len(t)
return err / tot
print("CER: {:2f}".format(100 * cer_cal(result["target"],result["predicted"])))
```
`CER: 28.70`.
`TIME: 04:08 min`
## Evaluation with GPT:
```python
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
import torch
import re
import sys
import editdistance as ed  # needed by cer_cal below
from transformers import AutoTokenizer, AutoModelWithLMHead
from datasets import Audio
from math import log
model_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt"
device = "cuda"
processor_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt"
chars_to_ignore_regex = r"[¥•"#$%&'()*+,-/:;<=>@[\]^_`{|}~⦅⦆「」、 、〃〈〉《》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏﹑﹔·'℃°•·.﹑︰〈〉─《﹖﹣﹂﹁﹔!?。。"#$%&'()*+,﹐-/:;<=>@[\]^_`{|}~⦅⦆「」、、〃》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏..!\"#$%&()*+,\-.\:;<=>?@\[\]\\\/^_`{|}~]"
tokenizer = AutoTokenizer.from_pretrained("ckiplab/gpt2-base-chinese")
lm_model = AutoModelWithLMHead.from_pretrained("ckiplab/gpt2-base-chinese").to(device)
model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
processor = Wav2Vec2Processor.from_pretrained(processor_name)
ds = load_dataset("common_voice", 'zh-TW', split="test")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
def map_to_array(batch):
audio = batch["audio"]
batch["speech"] = processor(audio["array"], sampling_rate=audio["sampling_rate"]).input_values[0]
batch["sampling_rate"] = audio["sampling_rate"]
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
return batch
ds = ds.map(map_to_array)
def map_to_pred(batch):
features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt")
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
decoded_results = []
for logit in logits:
pred_ids = torch.argmax(logit, dim=-1)
mask = pred_ids.ge(1).unsqueeze(-1).expand(logit.size())
vocab_size = logit.size()[-1]
voice_prob = torch.nn.functional.softmax((torch.masked_select(logit, mask).view(-1,vocab_size)),dim=-1)
lm_input = torch.cat((torch.tensor([tokenizer.cls_token_id]).to(device),pred_ids[pred_ids>0]), 0)
lm_prob = torch.nn.functional.softmax(lm_model(lm_input).logits, dim=-1)[:voice_prob.size()[0],:]
comb_pred_ids = torch.argmax(lm_prob*voice_prob, dim=-1)
decoded_results.append(processor.decode(comb_pred_ids))
batch["predicted"] = decoded_results
batch["target"] = batch["sentence"]
return batch
result = ds.map(map_to_pred, batched=True, batch_size=3, remove_columns=list(ds.features.keys()))
def cer_cal(groundtruth, hypothesis):
err = 0
tot = 0
for p, t in zip(hypothesis, groundtruth):
err += float(ed.eval(p.lower(), t.lower()))
tot += len(t)
return err / tot
print("CER: {:2f}".format(100 * cer_cal(result["target"],result["predicted"])))
```
`CER 25.70`.
`TIME: 06:04 min`
## Evaluation with GPT + beam search:
```python
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
import torch
import re
import sys
import editdistance as ed  # needed by cer_cal below
from transformers import AutoTokenizer, AutoModelWithLMHead
from datasets import Audio
from math import log
model_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt"
device = "cuda"
processor_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt"
chars_to_ignore_regex = r"[¥•"#$%&'()*+,-/:;<=>@[\]^_`{|}~⦅⦆「」、 、〃〈〉《》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏﹑﹔·'℃°•·.﹑︰〈〉─《﹖﹣﹂﹁﹔!?。。"#$%&'()*+,﹐-/:;<=>@[\]^_`{|}~⦅⦆「」、、〃》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏..!\"#$%&()*+,\-.\:;<=>?@\[\]\\\/^_`{|}~]"
tokenizer = AutoTokenizer.from_pretrained("ckiplab/gpt2-base-chinese")
lm_model = AutoModelWithLMHead.from_pretrained("ckiplab/gpt2-base-chinese").to(device)
model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
processor = Wav2Vec2Processor.from_pretrained(processor_name)
ds = load_dataset("common_voice", 'zh-TW', split="test")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
def map_to_array(batch):
audio = batch["audio"]
batch["speech"] = processor(audio["array"], sampling_rate=audio["sampling_rate"]).input_values[0]
batch["sampling_rate"] = audio["sampling_rate"]
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
return batch
ds = ds.map(map_to_array)
def map_to_pred(batch):
features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt")
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
decoded_results = []
for logit in logits:
sequences = [[[], 1.0]]
pred_ids = torch.argmax(logit, dim=-1)
mask = pred_ids.ge(1).unsqueeze(-1).expand(logit.size())
vocab_size = logit.size()[-1]
voice_prob = torch.nn.functional.softmax((torch.masked_select(logit, mask).view(-1,vocab_size)),dim=-1)
while True:
all_candidates = list()
exceed = False
for seq in sequences:
tokens, score = seq
gpt_input = torch.tensor([tokenizer.cls_token_id]+tokens).to(device)
gpt_prob = torch.nn.functional.softmax(lm_model(gpt_input).logits, dim=-1)[:len(gpt_input),:]
if len(gpt_input) >= len(voice_prob):
exceed = True
comb_pred_ids = gpt_prob*voice_prob[:len(gpt_input)]
v,i = torch.topk(comb_pred_ids,50,dim=-1)
for tok_id,tok_prob in zip(i.tolist()[-1],v.tolist()[-1]):
candidate = [tokens + [tok_id], score + -log(tok_prob)]
all_candidates.append(candidate)
ordered = sorted(all_candidates, key=lambda tup: tup[1])
sequences = ordered[:10]
if exceed:
break
decoded_results.append(processor.decode(sequences[0][0]))
batch["predicted"] = decoded_results
batch["target"] = batch["sentence"]
return batch
result = ds.map(map_to_pred, batched=True, batch_size=3, remove_columns=list(ds.features.keys()))
def cer_cal(groundtruth, hypothesis):
err = 0
tot = 0
for p, t in zip(hypothesis, groundtruth):
err += float(ed.eval(p.lower(), t.lower()))
tot += len(t)
return err / tot
print("CER: {:2f}".format(100 * cer_cal(result["target"],result["predicted"])))
```
`CER 18.36`.
## Evaluation with BERT:
```python
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
import torch
import re
import sys
import editdistance as ed  # needed by cer_cal below
from transformers import AutoTokenizer, AutoModelForMaskedLM
model_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt"
device = "cuda"
processor_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt"
chars_to_ignore_regex = r"[¥•"#$%&'()*+,-/:;<=>@[\]^_`{|}~⦅⦆「」、 、〃〈〉《》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏﹑﹔·'℃°•·.﹑︰〈〉─《﹖﹣﹂﹁﹔!?。。"#$%&'()*+,﹐-/:;<=>@[\]^_`{|}~⦅⦆「」、、〃》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏..!\"#$%&()*+,\-.\:;<=>?@\[\]\\\/^_`{|}~]"
tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
lm_model = AutoModelForMaskedLM.from_pretrained("bert-base-chinese").to(device)
model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
processor = Wav2Vec2Processor.from_pretrained(processor_name)
ds = load_dataset("common_voice", 'zh-TW', data_dir="./cv-corpus-6.1-2020-12-11", split="test")
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
batch["sampling_rate"] = resampler.new_freq
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
return batch
ds = ds.map(map_to_array)
def map_to_pred(batch):
features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt")
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
decoded_results = []
for logit in logits:
pred_ids = torch.argmax(logit, dim=-1)
mask = ~pred_ids.eq(tokenizer.pad_token_id).unsqueeze(-1).expand(logit.size())
vocab_size = logit.size()[-1]
voice_prob = torch.nn.functional.softmax((torch.masked_select(logit, mask).view(-1,vocab_size)),dim=-1)
lm_input = torch.masked_select(pred_ids, ~pred_ids.eq(tokenizer.pad_token_id)).unsqueeze(0)
mask_lm_prob = voice_prob.clone()
for i in range(lm_input.shape[-1]):
masked_lm_input = lm_input.clone()
masked_lm_input[0][i] = torch.tensor(tokenizer.mask_token_id).to('cuda')
lm_prob = torch.nn.functional.softmax(lm_model(masked_lm_input).logits, dim=-1).squeeze(0)
mask_lm_prob[i] = lm_prob[i]
comb_pred_ids = torch.argmax(mask_lm_prob*voice_prob, dim=-1)
decoded_results.append(processor.decode(comb_pred_ids))
batch["predicted"] = decoded_results
batch["target"] = batch["sentence"]
return batch
result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys()))
def cer_cal(groundtruth, hypothesis):
err = 0
tot = 0
for p, t in zip(hypothesis, groundtruth):
err += float(ed.eval(p.lower(), t.lower()))
tot += len(t)
return err / tot
print("CER: {:2f}".format(100 * cer_cal(result["target"],result["predicted"])))
```
`CER 25.57`.
`TIME: 09:49 min`
## Evaluation with T-TA:
setup
```
!git clone https://github.com/voidful/pytorch-tta.git
!mv ./pytorch-tta/tta ./tta
!wget https://github.com/voidful/pytorch-tta/releases/download/wiki_zh/wiki_zh.pt
```
```python
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
import torch
import re
import sys
import editdistance as ed  # needed by cer_cal below
from tta.modeling_tta import TTALMModel
from transformers import AutoTokenizer
import torch
model_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt"
device = "cuda"
processor_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt"
chars_to_ignore_regex = r"[¥•"#$%&'()*+,-/:;<=>@[\]^_`{|}~⦅⦆「」、 、〃〈〉《》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏﹑﹔·'℃°•·.﹑︰〈〉─《﹖﹣﹂﹁﹔!?。。"#$%&'()*+,﹐-/:;<=>@[\]^_`{|}~⦅⦆「」、、〃》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏..!\"#$%&()*+,\-.\:;<=>?@\[\]\\\/^_`{|}~]"
tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
lm_model = TTALMModel("bert-base-chinese")
tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
lm_model.load_state_dict(torch.load("./wiki_zh.pt",map_location=torch.device('cuda')))
lm_model.to('cuda')
lm_model.eval()
model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
processor = Wav2Vec2Processor.from_pretrained(processor_name)
ds = load_dataset("common_voice", 'zh-TW', data_dir="./cv-corpus-6.1-2020-12-11", split="test")
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
batch["sampling_rate"] = resampler.new_freq
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
return batch
ds = ds.map(map_to_array)
def map_to_pred(batch):
features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt")
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
decoded_results = []
for logit in logits:
pred_ids = torch.argmax(logit, dim=-1)
mask = ~pred_ids.eq(tokenizer.pad_token_id).unsqueeze(-1).expand(logit.size())
vocab_size = logit.size()[-1]
voice_prob = torch.nn.functional.softmax((torch.masked_select(logit, mask).view(-1,vocab_size)),dim=-1)
lm_input = torch.masked_select(pred_ids, ~pred_ids.eq(tokenizer.pad_token_id)).unsqueeze(0)
lm_prob = torch.nn.functional.softmax(lm_model.forward(lm_input)[0], dim=-1).squeeze(0)
comb_pred_ids = torch.argmax(lm_prob*voice_prob, dim=-1)
decoded_results.append(processor.decode(comb_pred_ids))
batch["predicted"] = decoded_results
batch["target"] = batch["sentence"]
return batch
result = ds.map(map_to_pred, batched=True, batch_size=16, remove_columns=list(ds.features.keys()))
def cer_cal(groundtruth, hypothesis):
err = 0
tot = 0
for p, t in zip(hypothesis, groundtruth):
err += float(ed.eval(p.lower(), t.lower()))
tot += len(t)
return err / tot
print("CER: {:2f}".format(100 * cer_cal(result["target"],result["predicted"])))
```
`CER: 25.77`.
`TIME: 06:01 min`
|
Paul-Vinh/bert-base-multilingual-cased-finetuned-squad
|
Paul-Vinh
| 2022-03-24T22:47:39Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-24T19:22:33Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-multilingual-cased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-finetuned-squad
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0122
## Model description
More information needed
## Intended uses & limitations
More information needed
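A minimal usage sketch, assuming the standard `question-answering` pipeline (the question and context below are made-up examples):
```python
from transformers import pipeline

# Minimal sketch: extractive QA with the fine-tuned multilingual checkpoint.
qa = pipeline("question-answering", model="Paul-Vinh/bert-base-multilingual-cased-finetuned-squad")

result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris, France.",
)
print(result["answer"], result["score"])
```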
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.9982 | 1.0 | 5555 | 0.9436 |
| 0.7694 | 2.0 | 11110 | 0.9356 |
| 0.5627 | 3.0 | 16665 | 1.0122 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
cdinh2022/distilbert-base-uncased-finetuned-emotion
|
cdinh2022
| 2022-03-24T21:44:36Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-24T21:15:07Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
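A minimal usage sketch, assuming the standard `text-classification` pipeline (the input sentence is only an example; note the model was trained for a fraction of an epoch):
```python
from transformers import pipeline

# Minimal sketch: emotion classification with the fine-tuned checkpoint.
classifier = pipeline("text-classification", model="cdinh2022/distilbert-base-uncased-finetuned-emotion")

print(classifier("I am so happy to see you again!"))
```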
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 0.1 | 25 | 1.4889 | 0.5195 | 0.3976 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
mp6kv/pump_intent_test
|
mp6kv
| 2022-03-24T18:40:12Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-05T19:25:45Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: pump_intent_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pump_intent_test
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
## Model description
Custom data was generated by labeling text according to three categories.
These three categories are the subcategories of Pump, i.e. cases where a user asks a question and expects an answer in response:
- Value: a slot value or a calculation
- Clarification: asking for further information on a previous answer
- Testing: testing for knowledge of facts and definitions
The model takes a string of user input text and classifies it into one of these three categories.
## Intended uses & limitations
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="mp6kv/pump_intent_test")

output = classifier("What is the value of the length of the blue object?")
score = output[0]['score']
label = output[0]['label']
```
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
huggingtweets/janieclone-wretched_worm
|
huggingtweets
| 2022-03-24T16:50:55Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-24T16:50:13Z |
---
language: en
thumbnail: http://www.huggingtweets.com/janieclone-wretched_worm/1648140650284/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1478043369578266624/vWL3TXE0_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1504460028270501895/uqbdF11C_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">wretched worm & Columbine Janie</div>
<div style="text-align: center; font-size: 14px;">@janieclone-wretched_worm</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from wretched worm & Columbine Janie.
| Data | wretched worm | Columbine Janie |
| --- | --- | --- |
| Tweets downloaded | 3226 | 544 |
| Retweets | 313 | 197 |
| Short tweets | 572 | 60 |
| Tweets kept | 2341 | 287 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3jmx6vuf/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @janieclone-wretched_worm's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/kpqts6sn) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/kpqts6sn/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/janieclone-wretched_worm')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
optimum/distilbert-base-uncased-mnli
|
optimum
| 2022-03-24T16:19:12Z | 11 | 3 |
transformers
|
[
"transformers",
"onnx",
"text-classification",
"distilbert",
"zero-shot-classification",
"en",
"dataset:multi_nli",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
zero-shot-classification
| 2022-03-24T16:17:54Z |
---
language: en
pipeline_tag: zero-shot-classification
tags:
- distilbert
datasets:
- multi_nli
metrics:
- accuracy
---
# ONNX conversion of typeform/distilbert-base-uncased-mnli
## Conversion of [typeform/distilbert-base-uncased-mnli](https://huggingface.co/typeform/distilbert-base-uncased-mnli)
This is the [uncased DistilBERT model](https://huggingface.co/distilbert-base-uncased) fine-tuned on [Multi-Genre Natural Language Inference](https://huggingface.co/datasets/multi_nli) (MNLI) dataset for the zero-shot classification task. The model is not case-sensitive, i.e., it does not make a difference between "english" and "English".
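A minimal sketch of running the exported ONNX weights through 🤗 Optimum's ONNX Runtime backend (this assumes a recent `optimum[onnxruntime]` install; the candidate labels are made up):
```python
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

model_id = "optimum/distilbert-base-uncased-mnli"
model = ORTModelForSequenceClassification.from_pretrained(model_id)  # loads the ONNX export
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Zero-shot classification via the NLI head.
classifier = pipeline("zero-shot-classification", model=model, tokenizer=tokenizer)
print(classifier(
    "I have a problem with my iphone that needs to be resolved asap!",
    candidate_labels=["urgent", "not urgent", "phone", "tablet", "computer"],
))
```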
## Training
Training is done on a [p3.2xlarge](https://aws.amazon.com/ec2/instance-types/p3/) AWS EC2 instance (1 NVIDIA Tesla V100 GPU), with the following hyperparameters:
```
$ run_glue.py \
--model_name_or_path distilbert-base-uncased \
--task_name mnli \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 16 \
--learning_rate 2e-5 \
--num_train_epochs 5 \
--output_dir /tmp/distilbert-base-uncased_mnli/
```
## Evaluation results
| Task | MNLI | MNLI-mm |
|:----:|:----:|:----:|
| Accuracy | 82.0 | 82.0 |
|
optimum/all-MiniLM-L6-v2
|
optimum
| 2022-03-24T16:16:57Z | 74,671 | 17 |
sentence-transformers
|
[
"sentence-transformers",
"onnx",
"feature-extraction",
"sentence-similarity",
"en",
"arxiv:1904.06472",
"arxiv:2102.07033",
"arxiv:2104.08727",
"arxiv:1704.05179",
"arxiv:1810.09305",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-24T16:15:58Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
language: en
license: apache-2.0
---
# ONNX conversion of all-MiniLM-L6-v2
## Conversion of [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-MiniLM-L6-v2)
------
## Background
The project aims to train sentence embedding models on very large sentence level datasets using a self-supervised
contrastive learning objective. We used the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model and fine-tuned it on a
1B sentence pairs dataset. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences, was actually paired with it in our dataset.
We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face, as part of the project:
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8 devices, as well as guidance from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks.
## Intended uses
Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures
the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.
By default, input text longer than 256 word pieces is truncated.
## Training procedure
### Pre-training
We use the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure.
### Fine-tuning
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity for each possible sentence pair in the batch.
We then apply the cross entropy loss by comparing with true pairs.
#### Hyper parameters
We trained our model on a TPU v3-8. We trained the model for 100k steps using a batch size of 1024 (128 per TPU core).
We used a learning rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with
a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`.
#### Training data
We use a concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion.
We sampled each dataset with a weighted probability; the configuration is detailed in the `data_config.json` file.
| Dataset | Paper | Number of training tuples |
|--------------------------------------------------------|:----------------------------------------:|:--------------------------:|
| [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 |
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 |
| [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 |
| [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 |
| [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 |
| [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395|
| [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 |
| [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 |
| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
| [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 |
| AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 |
| [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
| [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 |
| [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 |
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
| [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
| **Total** | | **1,170,060,424** |
|
optimum/bert-base-NER
|
optimum
| 2022-03-24T16:14:52Z | 26 | 2 |
transformers
|
[
"transformers",
"onnx",
"token-classification",
"en",
"dataset:conll2003",
"arxiv:1810.04805",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-24T16:13:41Z |
---
language: en
datasets:
- conll2003
license: mit
---
# ONNX conversion of bert-base-NER
## Conversion of [bert-base-NER](https://huggingface.co/dslim/bert-base-NER)
## Model description
**bert-base-NER** is a fine-tuned BERT model that is ready to use for **Named Entity Recognition** and achieves **state-of-the-art performance** for the NER task. It has been trained to recognize four types of entities: location (LOC), organizations (ORG), person (PER) and Miscellaneous (MISC).
Specifically, this model is a *bert-base-cased* model that was fine-tuned on the English version of the standard [CoNLL-2003 Named Entity Recognition](https://www.aclweb.org/anthology/W03-0419.pdf) dataset.
If you'd like to use a larger BERT-large model fine-tuned on the same dataset, a [**bert-large-NER**](https://huggingface.co/dslim/bert-large-NER/) version is also available.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for NER.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("dslim/bert-base-NER")
model = AutoModelForTokenClassification.from_pretrained("dslim/bert-base-NER")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "My name is Wolfgang and I live in Berlin"
ner_results = nlp(example)
print(ner_results)
```
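The snippet above uses the original PyTorch checkpoint from `dslim/bert-base-NER`. To run the ONNX weights in this repository, a sketch along the following lines should work with 🤗 Optimum's ONNX Runtime integration (assumed, not verified against a specific version):
```python
from optimum.onnxruntime import ORTModelForTokenClassification
from transformers import AutoTokenizer, pipeline

model_id = "optimum/bert-base-NER"
model = ORTModelForTokenClassification.from_pretrained(model_id)  # loads the ONNX export
tokenizer = AutoTokenizer.from_pretrained(model_id)

ner = pipeline("ner", model=model, tokenizer=tokenizer)
print(ner("My name is Wolfgang and I live in Berlin"))
```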
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. Furthermore, the model occasionally tags subword tokens as entities and post-processing of results may be necessary to handle those cases.
## Training data
This model was fine-tuned on English version of the standard [CoNLL-2003 Named Entity Recognition](https://www.aclweb.org/anthology/W03-0419.pdf) dataset.
The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes:
Abbreviation|Description
-|-
O|Outside of a named entity
B-MIS |Beginning of a miscellaneous entity right after another miscellaneous entity
I-MIS | Miscellaneous entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organization right after another organization
I-ORG |organization
B-LOC |Beginning of a location right after another location
I-LOC |Location
### CoNLL-2003 English Dataset Statistics
This dataset was derived from the Reuters corpus which consists of Reuters news stories. You can read more about how this dataset was created in the CoNLL-2003 paper.
#### # of training examples per entity type
Dataset|LOC|MISC|ORG|PER
-|-|-|-|-
Train|7140|3438|6321|6600
Dev|1837|922|1341|1842
Test|1668|702|1661|1617
#### # of articles/sentences/tokens per dataset
Dataset |Articles |Sentences |Tokens
-|-|-|-
Train |946 |14,987 |203,621
Dev |216 |3,466 |51,362
Test |231 |3,684 |46,435
## Training procedure
This model was trained on a single NVIDIA V100 GPU with recommended hyperparameters from the [original BERT paper](https://arxiv.org/pdf/1810.04805) which trained & evaluated the model on CoNLL-2003 NER task.
## Eval results
metric|dev|test
-|-|-
f1 |95.1 |91.3
precision |95.0 |90.7
recall |95.3 |91.9
The test metrics are a little lower than the official Google BERT results which encoded document context & experimented with CRF. More on replicating the original results [here](https://github.com/google-research/bert/issues/223).
### BibTeX entry and citation info
```
@article{DBLP:journals/corr/abs-1810-04805,
author = {Jacob Devlin and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
Understanding},
journal = {CoRR},
volume = {abs/1810.04805},
year = {2018},
url = {http://arxiv.org/abs/1810.04805},
archivePrefix = {arXiv},
eprint = {1810.04805},
timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
```
@inproceedings{tjong-kim-sang-de-meulder-2003-introduction,
title = "Introduction to the {C}o{NLL}-2003 Shared Task: Language-Independent Named Entity Recognition",
author = "Tjong Kim Sang, Erik F. and
De Meulder, Fien",
booktitle = "Proceedings of the Seventh Conference on Natural Language Learning at {HLT}-{NAACL} 2003",
year = "2003",
url = "https://www.aclweb.org/anthology/W03-0419",
pages = "142--147",
}
```
|
huggingtweets/untiltrees
|
huggingtweets
| 2022-03-24T16:08:51Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-24T15:42:21Z |
---
language: en
thumbnail: http://www.huggingtweets.com/untiltrees/1648138126631/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1350186722596974593/lANAV_Xj_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Dancing Box</div>
<div style="text-align: center; font-size: 14px;">@untiltrees</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Dancing Box.
| Data | Dancing Box |
| --- | --- |
| Tweets downloaded | 994 |
| Retweets | 41 |
| Short tweets | 91 |
| Tweets kept | 862 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/36kia24g/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @untiltrees's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/8md8jogv) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/8md8jogv/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/untiltrees')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/temujin9
|
huggingtweets
| 2022-03-24T15:12:42Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/temujin9/1648134757659/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1336188666792833027/j0wP6bb0_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Prose and Khans</div>
<div style="text-align: center; font-size: 14px;">@temujin9</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Prose and Khans.
| Data | Prose and Khans |
| --- | --- |
| Tweets downloaded | 3247 |
| Retweets | 100 |
| Short tweets | 292 |
| Tweets kept | 2855 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/8v2q4o1o/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @temujin9's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/31lwo8dx) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/31lwo8dx/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/temujin9')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
K024/mt5-zh-ja-en-trimmed
|
K024
| 2022-03-24T14:57:22Z | 217 | 57 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"translation",
"zh",
"ja",
"en",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- zh
- ja
- en
tags:
- translation
widget:
- text: "ja2zh: 吾輩は猫である。名前はまだ無い。"
license: cc-by-nc-sa-4.0
---
This model is fine-tuned from [mt5-base](https://huggingface.co/google/mt5-base).
The model vocabulary is trimmed to ~1/3 of the original by selecting the top 85,000 tokens in the training data. The code to trim the vocabulary can be found [here](https://gist.github.com/K024/4a100a0f4f4b07208958e0f3244da6ad).
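The linked gist has the complete trimming code. As a rough illustration, the frequency-based token selection step might look like the sketch below (assuming the parallel training corpus is available as an iterable of strings; the full procedure additionally rebuilds the SentencePiece model and slices the embedding matrices to the kept ids):
```python
from collections import Counter
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("google/mt5-base")
counts = Counter()

# `corpus` is assumed to be an iterable over the zh/ja/en training sentences
corpus = [
    "吾輩は猫である。名前はまだ無い。",
    "I am a cat. As yet I have no name.",
]
for line in corpus:
    counts.update(tokenizer(line)["input_ids"])

# keep the 85,000 most frequent token ids, plus the special tokens
keep_ids = sorted({i for i, _ in counts.most_common(85_000)} | set(tokenizer.all_special_ids))
print(len(keep_ids))
```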
Usage:
```python
from transformers import (
T5Tokenizer,
MT5ForConditionalGeneration,
Text2TextGenerationPipeline,
)
path = "K024/mt5-zh-ja-en-trimmed"
pipe = Text2TextGenerationPipeline(
model=MT5ForConditionalGeneration.from_pretrained(path),
tokenizer=T5Tokenizer.from_pretrained(path),
)
sentence = "ja2zh: 吾輩は猫である。名前はまだ無い。"
res = pipe(sentence, max_length=100, num_beams=4)
res[0]['generated_text']
```
Training data:
```
wikimedia-en-ja
wikimedia-en-zh
wikimedia-ja-zh
wikititles-ja-en
wikititles-zh-en
wikimatrix-ja-zh
news-commentary-en-ja
news-commentary-en-zh
news-commentary-ja-zh
ted2020-en-ja
ted2020-en-zh
ted2020-ja-zh
```
License: [![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]
[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
[cc-by-nc-sa-image]: https://licensebuttons.net/l/by-nc-sa/4.0/88x31.png
|
gayanin/t5-small-med-term-conditional-masking
|
gayanin
| 2022-03-24T14:54:49Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-23T20:16:40Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-small-med-term-conditional-masking
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-med-term-conditional-masking
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6808
- Rouge2 Precision: 0.6855
- Rouge2 Recall: 0.486
- Rouge2 Fmeasure: 0.5507
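The checkpoint can be loaded through the `text2text-generation` pipeline; a minimal sketch is shown below (the input sentence is purely illustrative, and the exact prompt format depends on how the conditional masking task was set up during fine-tuning):
```python
from transformers import pipeline

# illustrative only; the real input format follows the fine-tuning setup
pipe = pipeline("text2text-generation",
                model="gayanin/t5-small-med-term-conditional-masking")
print(pipe("The patient presented with severe headache and nausea.", max_length=64))
```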
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:------:|:---------------:|:----------------:|:-------------:|:---------------:|
| 0.9303 | 1.0 | 15827 | 0.8262 | 0.6603 | 0.4698 | 0.5318 |
| 0.8677 | 2.0 | 31654 | 0.7679 | 0.6695 | 0.4762 | 0.539 |
| 0.8315 | 3.0 | 47481 | 0.7393 | 0.6741 | 0.4783 | 0.5418 |
| 0.7999 | 4.0 | 63308 | 0.7194 | 0.6774 | 0.4811 | 0.5448 |
| 0.7746 | 5.0 | 79135 | 0.7059 | 0.6804 | 0.4815 | 0.5459 |
| 0.7785 | 6.0 | 94962 | 0.6958 | 0.6827 | 0.4841 | 0.5485 |
| 0.7592 | 7.0 | 110789 | 0.6893 | 0.6841 | 0.4849 | 0.5494 |
| 0.745 | 8.0 | 126616 | 0.6849 | 0.6846 | 0.4852 | 0.5498 |
| 0.7443 | 9.0 | 142443 | 0.6818 | 0.6854 | 0.4865 | 0.551 |
| 0.7417 | 10.0 | 158270 | 0.6808 | 0.6855 | 0.486 | 0.5507 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
beston91/gpt2-xl_ft_logits_25k
|
beston91
| 2022-03-24T12:59:29Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-22T17:03:46Z |
---
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl_ft_logits_25k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl_ft_logits_25k
This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.99 | 136 | 6.2712 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
### Perplexity
Score: 17.583023071289062
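For completeness, a minimal generation sketch (the prompt is illustrative; gpt2-xl has roughly 1.5B parameters, so a GPU is recommended):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="beston91/gpt2-xl_ft_logits_25k")
# prompt and generation settings are illustrative
out = generator("Once upon a time", max_length=50, num_return_sequences=1)
print(out[0]["generated_text"])
```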
|
huggingtweets/rronigj
|
huggingtweets
| 2022-03-24T12:47:01Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-24T12:37:39Z |
---
language: en
thumbnail: http://www.huggingtweets.com/rronigj/1648126016294/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1251916496307175424/rFilH506_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Rron Gjinovci</div>
<div style="text-align: center; font-size: 14px;">@rronigj</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Rron Gjinovci.
| Data | Rron Gjinovci |
| --- | --- |
| Tweets downloaded | 173 |
| Retweets | 45 |
| Short tweets | 24 |
| Tweets kept | 104 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/33ceg6s6/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @rronigj's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3nokbt1r) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3nokbt1r/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/rronigj')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
espnet/Karthik_DSTC2_asr_train_asr_wav2vec_conformer_2
|
espnet
| 2022-03-24T12:42:03Z | 1 | 0 |
espnet
|
[
"espnet",
"tensorboard",
"audio",
"automatic-speech-recognition",
"en",
"dataset:DSTC2",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] |
automatic-speech-recognition
| 2022-03-24T12:03:09Z |
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: en
datasets:
- DSTC2
license: cc-by-4.0
---
## ESPnet2 ASR pretrained model
### `espnet/Karthik_DSTC2_asr_train_asr_wav2vec_conformer_2`
This model was trained by Karthik using the DSTC2/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
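Until the official demo is added, the usual ESPnet2 inference flow would look roughly like the sketch below (assuming `espnet_model_zoo` is installed and `sample.wav` is a 16 kHz mono recording; the decoding options are illustrative defaults):
```python
import soundfile
from espnet_model_zoo.downloader import ModelDownloader
from espnet2.bin.asr_inference import Speech2Text

d = ModelDownloader()
speech2text = Speech2Text(
    **d.download_and_unpack("espnet/Karthik_DSTC2_asr_train_asr_wav2vec_conformer_2"),
    maxlenratio=0.0,
    minlenratio=0.0,
    beam_size=10,
)

speech, rate = soundfile.read("sample.wav")  # 16 kHz mono audio
text, tokens, token_ids, hyp = speech2text(speech)[0]
print(text)
```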
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Thant123/distilbert-base-uncased-finetuned-emotion
|
Thant123
| 2022-03-24T12:17:39Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-24T12:02:03Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.924
- name: F1
type: f1
value: 0.9241019999324234
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2270
- Accuracy: 0.924
- F1: 0.9241
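A minimal inference sketch using the `text-classification` pipeline (the example sentence is illustrative; the returned label strings depend on the saved config of this checkpoint):
```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="Thant123/distilbert-base-uncased-finetuned-emotion")
# example input is illustrative; labels follow the emotion dataset classes
print(classifier("I'm thrilled that my pull request finally got merged!"))
```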
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8204 | 1.0 | 250 | 0.3160 | 0.9035 | 0.9008 |
| 0.253 | 2.0 | 500 | 0.2270 | 0.924 | 0.9241 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
stmnk/wav2vec2-large-xls-r-300m-dutch-colab
|
stmnk
| 2022-03-24T11:58:54Z | 0 | 0 | null |
[
"nl",
"robust-speech-event",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language:
- nl
license: apache-2.0
tags:
- nl
- robust-speech-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_7_0
---
|
DrishtiSharma/wav2vec2-large-xls-r-300m-as-with-LM-v2
|
DrishtiSharma
| 2022-03-24T11:58:51Z | 0 | 1 | null |
[
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"as",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language:
- as
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- as
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-as-with-LM-v2
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: as
metrics:
- name: Test WER
type: wer
value: []
- name: Test CER
type: cer
value: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
### Note: Model files are missing from this repository; they probably did not get pushed (via git) correctly. :(
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1679
- Wer: 0.5761
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000111
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 200
- mixed_precision_training: Native AMP
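For reference, these settings correspond roughly to a `TrainingArguments` configuration like the sketch below (illustrative only; the original training script is not part of this repository):
```python
from transformers import TrainingArguments

# illustrative mapping of the hyperparameters listed above, not the original script
training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-300m-as-with-LM-v2",
    learning_rate=1.11e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    warmup_steps=300,
    num_train_epochs=200,
    seed=42,
    lr_scheduler_type="linear",
    fp16=True,  # Native AMP mixed precision
)
```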
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 8.3852 | 10.51 | 200 | 3.6402 | 1.0 |
| 3.5374 | 21.05 | 400 | 3.3894 | 1.0 |
| 2.8645 | 31.56 | 600 | 1.3143 | 0.8303 |
| 1.1784 | 42.1 | 800 | 0.9417 | 0.6661 |
| 0.7805 | 52.62 | 1000 | 0.9292 | 0.6237 |
| 0.5973 | 63.15 | 1200 | 0.9489 | 0.6014 |
| 0.4784 | 73.67 | 1400 | 0.9916 | 0.5962 |
| 0.4138 | 84.21 | 1600 | 1.0272 | 0.6121 |
| 0.3491 | 94.72 | 1800 | 1.0412 | 0.5984 |
| 0.3062 | 105.26 | 2000 | 1.0769 | 0.6005 |
| 0.2707 | 115.77 | 2200 | 1.0708 | 0.5752 |
| 0.2459 | 126.31 | 2400 | 1.1285 | 0.6009 |
| 0.2234 | 136.82 | 2600 | 1.1209 | 0.5949 |
| 0.2035 | 147.36 | 2800 | 1.1348 | 0.5842 |
| 0.1876 | 157.87 | 3000 | 1.1480 | 0.5872 |
| 0.1669 | 168.41 | 3200 | 1.1496 | 0.5838 |
| 0.1595 | 178.92 | 3400 | 1.1721 | 0.5778 |
| 0.1505 | 189.46 | 3600 | 1.1654 | 0.5744 |
| 0.1486 | 199.97 | 3800 | 1.1679 | 0.5761 |
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
|
sammy786/wav2vec2-xlsr-romansh_sursilvan
|
sammy786
| 2022-03-24T11:58:43Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"rm-sursilv",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- rm-sursilv
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- rm-sursilv
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: sammy786/wav2vec2-xlsr-romansh_sursilvan
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: rm-sursilv
metrics:
- name: Test WER
type: wer
value: 13.82
- name: Test CER
type: cer
value: 3.02
---
# sammy786/wav2vec2-xlsr-romansh_sursilvan
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - rm-sursilv dataset.
It achieves the following results on the evaluation set (which is 10 percent of the train set merged with the other and dev sets):
- Loss: 16.38
- Wer: 21.25
## Model description
"facebook/wav2vec2-xls-r-1b" was fine-tuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data -
Common Voice Romansh Sursilvan train.tsv, dev.tsv and other.tsv
## Training procedure
To create the train dataset, all available splits were concatenated and a 90-10 train/evaluation split was used.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000045637994662983496
- train_batch_size: 16
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
| Step | Training Loss | Validation Loss | Wer |
|------|---------------|-----------------|----------|
| 200 | 4.825500 | 2.932350 | 1.000000 |
| 400 | 1.325600 | 0.292645 | 0.415436 |
| 600 | 0.709800 | 0.219167 | 0.324451 |
| 800 | 0.576800 | 0.174390 | 0.275477 |
| 1000 | 0.538100 | 0.183737 | 0.272116 |
| 1200 | 0.475200 | 0.159078 | 0.253871 |
| 1400 | 0.420400 | 0.167277 | 0.240907 |
| 1600 | 0.393500 | 0.167216 | 0.247269 |
| 1800 | 0.407500 | 0.178282 | 0.239827 |
| 2000 | 0.374400 | 0.184590 | 0.239467 |
| 2200 | 0.382600 | 0.164106 | 0.227824 |
| 2400 | 0.363100 | 0.162543 | 0.228544 |
| 2600 | 0.199000 | 0.172903 | 0.231665 |
| 2800 | 0.150800 | 0.160117 | 0.222662 |
| 3000 | 0.101100 | 0.169553 | 0.222662 |
| 3200 | 0.104200 | 0.161056 | 0.220622 |
| 3400 | 0.096900 | 0.161562 | 0.216781 |
| 3600 | 0.092200 | 0.163880 | 0.212580 |
| 3800 | 0.089200 | 0.162288 | 0.214140 |
| 4000 | 0.076200 | 0.160470 | 0.213540 |
| 4200 | 0.087900 | 0.162827 | 0.213060 |
| 4400 | 0.066200 | 0.161096 | 0.213300 |
| 4600 | 0.076000 | 0.162060 | 0.213660 |
| 4800 | 0.071400 | 0.162045 | 0.213300 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id sammy786/wav2vec2-xlsr-romansh_sursilvan --dataset mozilla-foundation/common_voice_8_0 --config rm-sursilv --split test
```
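#### Inference (sketch)
For quick transcription outside the evaluation script, the checkpoint can also be loaded through the `automatic-speech-recognition` pipeline (a sketch, assuming a local 16 kHz recording named `sample.wav`; file decoding requires ffmpeg):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition",
               model="sammy786/wav2vec2-xlsr-romansh_sursilvan")
print(asr("sample.wav"))  # returns {"text": "..."}
```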
|
sammy786/wav2vec2-xlsr-kyrgyz
|
sammy786
| 2022-03-24T11:58:41Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"ky",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- ky
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- ky
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: sammy786/wav2vec2-xlsr-kyrgyz
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: ky
metrics:
- name: Test WER
type: wer
value: 25.24
- name: Test CER
type: cer
value: 6.25
---
# sammy786/wav2vec2-xlsr-kyrgyz
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - ky dataset.
It achieves the following results on the evaluation set (which is 10 percent of the train set merged with the other and dev sets):
- Loss: 43.06
- Wer: 39.19
## Model description
"facebook/wav2vec2-xls-r-1b" was fine-tuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data -
Common Voice Kyrgyz train.tsv, dev.tsv and other.tsv
## Training procedure
To create the train dataset, all available splits were concatenated and a 90-10 train/evaluation split was used.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000045637994662983496
- train_batch_size: 8
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Step | Training Loss | Validation Loss | Wer |
|------|---------------|-----------------|----------|
| 200 | 5.357800 | 2.700367 | 1.000000 |
| 400 | 1.513600 | 0.642542 | 0.598820 |
| 600 | 0.961900 | 0.530665 | 0.502739 |
| 800 | 0.776000 | 0.507709 | 0.462705 |
| 1000 | 0.646100 | 0.453115 | 0.444164 |
| 1200 | 0.581200 | 0.454797 | 0.438264 |
| 1400 | 0.437900 | 0.459389 | 0.426464 |
| 1600 | 0.348600 | 0.401247 | 0.416351 |
| 1800 | 0.312800 | 0.436135 | 0.409608 |
| 2000 | 0.294100 | 0.440911 | 0.398651 |
| 2200 | 0.281400 | 0.432729 | 0.394016 |
| 2400 | 0.258400 | 0.429860 | 0.393595 |
| 2600 | 0.263700 | 0.432689 | 0.395280 |
| 2800 | 0.256900 | 0.430672 | 0.391909 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id sammy786/wav2vec2-xlsr-kyrgyz --dataset mozilla-foundation/common_voice_8_0 --config ky --split test
```
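#### Inference (sketch)
Besides the evaluation script, inference can be run directly with the processor and model (a sketch, assuming `sample.wav` is a mono recording and librosa is installed):
```python
import librosa
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "sammy786/wav2vec2-xlsr-kyrgyz"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# load and resample the recording to 16 kHz
speech, _ = librosa.load("sample.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1)))
```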
|
reichenbach/wav2vec2-large-xls-r-300m-as
|
reichenbach
| 2022-03-24T11:58:33Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"as",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
language:
- as
tags:
- generated_from_trainer
- robust-speech-event
- hf-asr-leaderboard
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-as
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-as
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8318
- Wer: 0.5174
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.12
- num_epochs: 120
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.882 | 25.0 | 400 | 1.2290 | 0.8182 |
| 0.8275 | 50.0 | 800 | 0.6835 | 0.5398 |
| 0.337 | 75.0 | 1200 | 0.7789 | 0.5107 |
| 0.2113 | 100.0 | 1600 | 0.8318 | 0.5174 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3
### Test Evaluation
Common Voice Assamese Test Set (v7.0)
- WER: 0.7224
- CER: 0.2882
|
lgris/wav2vec2-xls-r-gn-cv7
|
lgris
| 2022-03-24T11:58:25Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"gn",
"robust-speech-event",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- gn
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- gn
- robust-speech-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: wav2vec2-xls-r-gn-cv7
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: gn
metrics:
- name: Validation WER
type: wer
value: 73.02
- name: Validation CER
type: cer
value: 17.79
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7.0
type: mozilla-foundation/common_voice_7_0
args: gn
metrics:
- name: Test WER
type: wer
value: 62.65
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-gn-cv7
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7197
- Wer: 0.7434
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 13000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| 3.4669 | 6.24 | 100 | 3.3003 | 1.0 |
| 3.3214 | 12.48 | 200 | 3.2090 | 1.0 |
| 3.1619 | 18.73 | 300 | 2.6322 | 1.0 |
| 1.751 | 24.97 | 400 | 1.4089 | 0.9803 |
| 0.7997 | 31.24 | 500 | 0.9996 | 0.9211 |
| 0.4996 | 37.48 | 600 | 0.9879 | 0.8553 |
| 0.3677 | 43.73 | 700 | 0.9543 | 0.8289 |
| 0.2851 | 49.97 | 800 | 1.0627 | 0.8487 |
| 0.2556 | 56.24 | 900 | 1.0933 | 0.8355 |
| 0.2268 | 62.48 | 1000 | 0.9191 | 0.8026 |
| 0.1914 | 68.73 | 1100 | 0.9582 | 0.7961 |
| 0.1749 | 74.97 | 1200 | 1.0502 | 0.8092 |
| 0.157 | 81.24 | 1300 | 0.9998 | 0.7632 |
| 0.1505 | 87.48 | 1400 | 1.0076 | 0.7303 |
| 0.1278 | 93.73 | 1500 | 0.9321 | 0.75 |
| 0.1078 | 99.97 | 1600 | 1.0383 | 0.7697 |
| 0.1156 | 106.24 | 1700 | 1.0302 | 0.7763 |
| 0.1107 | 112.48 | 1800 | 1.0419 | 0.7763 |
| 0.091 | 118.73 | 1900 | 1.0694 | 0.75 |
| 0.0829 | 124.97 | 2000 | 1.0257 | 0.7829 |
| 0.0865 | 131.24 | 2100 | 1.2108 | 0.7368 |
| 0.0907 | 137.48 | 2200 | 1.0458 | 0.7697 |
| 0.0897 | 143.73 | 2300 | 1.1504 | 0.7895 |
| 0.0766 | 149.97 | 2400 | 1.1663 | 0.7237 |
| 0.0659 | 156.24 | 2500 | 1.1320 | 0.7632 |
| 0.0699 | 162.48 | 2600 | 1.2586 | 0.7434 |
| 0.0613 | 168.73 | 2700 | 1.1815 | 0.8158 |
| 0.0598 | 174.97 | 2800 | 1.3299 | 0.75 |
| 0.0577 | 181.24 | 2900 | 1.2035 | 0.7171 |
| 0.0576 | 187.48 | 3000 | 1.2134 | 0.7434 |
| 0.0518 | 193.73 | 3100 | 1.3406 | 0.7566 |
| 0.0524 | 199.97 | 3200 | 1.4251 | 0.75 |
| 0.0467 | 206.24 | 3300 | 1.3533 | 0.7697 |
| 0.0428 | 212.48 | 3400 | 1.2463 | 0.7368 |
| 0.0453 | 218.73 | 3500 | 1.4532 | 0.7566 |
| 0.0473 | 224.97 | 3600 | 1.3152 | 0.7434 |
| 0.0451 | 231.24 | 3700 | 1.2232 | 0.7368 |
| 0.0361 | 237.48 | 3800 | 1.2938 | 0.7171 |
| 0.045 | 243.73 | 3900 | 1.4148 | 0.7434 |
| 0.0422 | 249.97 | 4000 | 1.3786 | 0.7961 |
| 0.036 | 256.24 | 4100 | 1.4488 | 0.7697 |
| 0.0352 | 262.48 | 4200 | 1.2294 | 0.6776 |
| 0.0326 | 268.73 | 4300 | 1.2796 | 0.6974 |
| 0.034 | 274.97 | 4400 | 1.3805 | 0.7303 |
| 0.0305 | 281.24 | 4500 | 1.4994 | 0.7237 |
| 0.0325 | 287.48 | 4600 | 1.4330 | 0.6908 |
| 0.0338 | 293.73 | 4700 | 1.3091 | 0.7368 |
| 0.0306 | 299.97 | 4800 | 1.2174 | 0.7171 |
| 0.0299 | 306.24 | 4900 | 1.3527 | 0.7763 |
| 0.0287 | 312.48 | 5000 | 1.3651 | 0.7368 |
| 0.0274 | 318.73 | 5100 | 1.4337 | 0.7368 |
| 0.0258 | 324.97 | 5200 | 1.3831 | 0.6908 |
| 0.022 | 331.24 | 5300 | 1.3556 | 0.6974 |
| 0.021 | 337.48 | 5400 | 1.3836 | 0.7237 |
| 0.0241 | 343.73 | 5500 | 1.4352 | 0.7039 |
| 0.0229 | 349.97 | 5600 | 1.3904 | 0.7105 |
| 0.026 | 356.24 | 5700 | 1.4131 | 0.7171 |
| 0.021 | 362.48 | 5800 | 1.5426 | 0.6974 |
| 0.0191 | 368.73 | 5900 | 1.5960 | 0.7632 |
| 0.0227 | 374.97 | 6000 | 1.6240 | 0.7368 |
| 0.0204 | 381.24 | 6100 | 1.4301 | 0.7105 |
| 0.0175 | 387.48 | 6200 | 1.5554 | 0.75 |
| 0.0183 | 393.73 | 6300 | 1.6044 | 0.7697 |
| 0.0183 | 399.97 | 6400 | 1.5963 | 0.7368 |
| 0.016 | 406.24 | 6500 | 1.5679 | 0.7829 |
| 0.0178 | 412.48 | 6600 | 1.5928 | 0.7697 |
| 0.014 | 418.73 | 6700 | 1.7000 | 0.7632 |
| 0.0182 | 424.97 | 6800 | 1.5340 | 0.75 |
| 0.0148 | 431.24 | 6900 | 1.9274 | 0.7368 |
| 0.0148 | 437.48 | 7000 | 1.6437 | 0.7697 |
| 0.0173 | 443.73 | 7100 | 1.5468 | 0.75 |
| 0.0109 | 449.97 | 7200 | 1.6083 | 0.75 |
| 0.0167 | 456.24 | 7300 | 1.6732 | 0.75 |
| 0.0139 | 462.48 | 7400 | 1.5097 | 0.7237 |
| 0.013 | 468.73 | 7500 | 1.5947 | 0.7171 |
| 0.0128 | 474.97 | 7600 | 1.6260 | 0.7105 |
| 0.0166 | 481.24 | 7700 | 1.5756 | 0.7237 |
| 0.0127 | 487.48 | 7800 | 1.4506 | 0.6908 |
| 0.013 | 493.73 | 7900 | 1.4882 | 0.7368 |
| 0.0125 | 499.97 | 8000 | 1.5589 | 0.7829 |
| 0.0141 | 506.24 | 8100 | 1.6328 | 0.7434 |
| 0.0115 | 512.48 | 8200 | 1.6586 | 0.7434 |
| 0.0117 | 518.73 | 8300 | 1.6043 | 0.7105 |
| 0.009 | 524.97 | 8400 | 1.6508 | 0.7237 |
| 0.0108 | 531.24 | 8500 | 1.4507 | 0.6974 |
| 0.011 | 537.48 | 8600 | 1.5942 | 0.7434 |
| 0.009 | 543.73 | 8700 | 1.8121 | 0.7697 |
| 0.0112 | 549.97 | 8800 | 1.6923 | 0.7697 |
| 0.0073 | 556.24 | 8900 | 1.7096 | 0.7368 |
| 0.0098 | 562.48 | 9000 | 1.7052 | 0.7829 |
| 0.0088 | 568.73 | 9100 | 1.6956 | 0.7566 |
| 0.0099 | 574.97 | 9200 | 1.4909 | 0.7171 |
| 0.0075 | 581.24 | 9300 | 1.6307 | 0.7697 |
| 0.0077 | 587.48 | 9400 | 1.6196 | 0.7961 |
| 0.0088 | 593.73 | 9500 | 1.6119 | 0.7566 |
| 0.0085 | 599.97 | 9600 | 1.4512 | 0.7368 |
| 0.0086 | 606.24 | 9700 | 1.5992 | 0.7237 |
| 0.0109 | 612.48 | 9800 | 1.4706 | 0.7368 |
| 0.0098 | 618.73 | 9900 | 1.3824 | 0.7171 |
| 0.0091 | 624.97 | 10000 | 1.4776 | 0.6974 |
| 0.0072 | 631.24 | 10100 | 1.4896 | 0.7039 |
| 0.0087 | 637.48 | 10200 | 1.5467 | 0.7368 |
| 0.007 | 643.73 | 10300 | 1.5493 | 0.75 |
| 0.0076 | 649.97 | 10400 | 1.5706 | 0.7303 |
| 0.0085 | 656.24 | 10500 | 1.5748 | 0.7237 |
| 0.0075 | 662.48 | 10600 | 1.5081 | 0.7105 |
| 0.0068 | 668.73 | 10700 | 1.4967 | 0.6842 |
| 0.0117 | 674.97 | 10800 | 1.4986 | 0.7105 |
| 0.0054 | 681.24 | 10900 | 1.5587 | 0.7303 |
| 0.0059 | 687.48 | 11000 | 1.5886 | 0.7171 |
| 0.0071 | 693.73 | 11100 | 1.5746 | 0.7171 |
| 0.0048 | 699.97 | 11200 | 1.6166 | 0.7237 |
| 0.0048 | 706.24 | 11300 | 1.6098 | 0.7237 |
| 0.0056 | 712.48 | 11400 | 1.5834 | 0.7237 |
| 0.0048 | 718.73 | 11500 | 1.5653 | 0.7171 |
| 0.0045 | 724.97 | 11600 | 1.6252 | 0.7237 |
| 0.0068 | 731.24 | 11700 | 1.6794 | 0.7171 |
| 0.0044 | 737.48 | 11800 | 1.6881 | 0.7039 |
| 0.008 | 743.73 | 11900 | 1.7393 | 0.75 |
| 0.0045 | 749.97 | 12000 | 1.6869 | 0.7237 |
| 0.0047 | 756.24 | 12100 | 1.7105 | 0.7303 |
| 0.0057 | 762.48 | 12200 | 1.7439 | 0.7303 |
| 0.004 | 768.73 | 12300 | 1.7871 | 0.7434 |
| 0.0061 | 774.97 | 12400 | 1.7812 | 0.7303 |
| 0.005 | 781.24 | 12500 | 1.7410 | 0.7434 |
| 0.0056 | 787.48 | 12600 | 1.7220 | 0.7303 |
| 0.0064 | 793.73 | 12700 | 1.7141 | 0.7434 |
| 0.0042 | 799.97 | 12800 | 1.7139 | 0.7368 |
| 0.0049 | 806.24 | 12900 | 1.7211 | 0.7434 |
| 0.0044 | 812.48 | 13000 | 1.7197 | 0.7434 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
|
infinitejoy/wav2vec2-large-xls-r-300m-sakha
|
infinitejoy
| 2022-03-24T11:58:14Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"sah",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- sah
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
- sah
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: XLS-R-300M - Sakha
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: sah
metrics:
- name: Test WER
type: wer
value: 44.196
- name: Test CER
type: cer
value: 10.271
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-sakha
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - SAH dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4995
- Wer: 0.4421
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.8597 | 8.47 | 500 | 0.7731 | 0.7211 |
| 1.2508 | 16.95 | 1000 | 0.5368 | 0.5989 |
| 1.1066 | 25.42 | 1500 | 0.5034 | 0.5533 |
| 1.0064 | 33.9 | 2000 | 0.4686 | 0.5114 |
| 0.9324 | 42.37 | 2500 | 0.4927 | 0.5056 |
| 0.876 | 50.85 | 3000 | 0.4734 | 0.4795 |
| 0.8082 | 59.32 | 3500 | 0.4748 | 0.4799 |
| 0.7604 | 67.8 | 4000 | 0.4949 | 0.4691 |
| 0.7241 | 76.27 | 4500 | 0.5090 | 0.4627 |
| 0.6739 | 84.75 | 5000 | 0.4967 | 0.4452 |
| 0.6447 | 93.22 | 5500 | 0.5071 | 0.4437 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
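### Inference (sketch)
A short transcription sketch, assuming torchaudio is available and `sample.wav` is a local mono recording (resampling to 16 kHz is done explicitly):
```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "infinitejoy/wav2vec2-large-xls-r-300m-sakha"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

waveform, sample_rate = torchaudio.load("sample.wav")  # assumed mono
waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

inputs = processor(waveform.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1)))
```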
|
infinitejoy/wav2vec2-large-xls-r-300m-lithuanian
|
infinitejoy
| 2022-03-24T11:58:06Z | 20 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"lt",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- lt
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
- lt
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: XLS-R-300M - Lithuanian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: lt
metrics:
- name: Test WER
type: wer
value: 24.859
- name: Test CER
type: cer
value: 4.764
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-lithuanian
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - LT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1722
- Wer: 0.2486
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.6837 | 8.0 | 2000 | 0.6649 | 0.7515 |
| 1.1105 | 16.0 | 4000 | 0.2386 | 0.3436 |
| 1.0069 | 24.0 | 6000 | 0.2008 | 0.2968 |
| 0.9417 | 32.0 | 8000 | 0.1915 | 0.2774 |
| 0.887 | 40.0 | 10000 | 0.1819 | 0.2616 |
| 0.8563 | 48.0 | 12000 | 0.1729 | 0.2475 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
infinitejoy/wav2vec2-large-xls-r-300m-hausa
|
infinitejoy
| 2022-03-24T11:58:04Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"ha",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- ha
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
- ha
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: XLS-R-300M - Hausa
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: ha
metrics:
- name: Test WER
type: wer
value: 100
- name: Test CER
type: cer
value: 132.32
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hausa
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - HA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5756
- Wer: 0.6014
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.7064 | 11.36 | 500 | 2.7112 | 1.0 |
| 1.3079 | 22.73 | 1000 | 0.7337 | 0.7776 |
| 1.0919 | 34.09 | 1500 | 0.5938 | 0.7023 |
| 0.9546 | 45.45 | 2000 | 0.5698 | 0.6133 |
| 0.8895 | 56.82 | 2500 | 0.5739 | 0.6142 |
| 0.8152 | 68.18 | 3000 | 0.5579 | 0.6091 |
| 0.7703 | 79.55 | 3500 | 0.5813 | 0.6210 |
| 0.732 | 90.91 | 4000 | 0.5756 | 0.5860 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
infinitejoy/wav2vec2-large-xls-r-300m-breton-cv8
|
infinitejoy
| 2022-03-24T11:58:01Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"br",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- br
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- br
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: XLS-R-300M - Breton
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: br
metrics:
- name: Test WER
type: wer
value: 54.855
- name: Test CER
type: cer
value: 17.865
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLS-R-300M - Breton
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - BR dataset.
It achieves the following results on the evaluation set:
- Loss: NA
- Wer: NA
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
### Training results
NA
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id infinitejoy/wav2vec2-large-xls-r-300m-breton-cv8 --dataset mozilla-foundation/common_voice_8_0 --config br --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id infinitejoy/wav2vec2-large-xls-r-300m-breton-cv8 --dataset speech-recognition-community-v2/dev_data --config br --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
### Inference With LM
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "infinitejoy/wav2vec2-large-xls-r-300m-breton-cv8"
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "br", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
logits = model(input_values).logits
transcription = processor.batch_decode(logits.numpy()).text
```
### Eval results on Common Voice 8 "test" (WER):
NA
|
azuur/wav2vec2-base-gn-demo
|
azuur
| 2022-03-24T11:57:52Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"hf-asr-leaderboard",
"gn",
"dataset:common_voice",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
language:
- gn
tags:
- generated_from_trainer
- mozilla-foundation/common_voice_8_0
- robust-speech-event
- hf-asr-leaderboard
datasets:
- common_voice
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-base-gn-demo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-gn-demo
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7426
- Wer: 0.7256
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 50
- num_epochs: 60
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 4.0 | 100 | 0.7045 | 0.7409 |
| No log | 8.0 | 200 | 0.7200 | 0.75 |
| No log | 12.0 | 300 | 0.7400 | 0.7439 |
| No log | 16.0 | 400 | 0.7677 | 0.7515 |
| 0.0846 | 20.0 | 500 | 0.7765 | 0.7271 |
| 0.0846 | 24.0 | 600 | 0.7821 | 0.7287 |
| 0.0846 | 28.0 | 700 | 0.7671 | 0.7180 |
| 0.0846 | 32.0 | 800 | 0.7594 | 0.7180 |
| 0.0846 | 36.0 | 900 | 0.7500 | 0.7165 |
| 0.0713 | 40.0 | 1000 | 0.7351 | 0.7287 |
| 0.0713 | 44.0 | 1100 | 0.7361 | 0.7241 |
| 0.0713 | 48.0 | 1200 | 0.7389 | 0.7378 |
| 0.0713 | 52.0 | 1300 | 0.7424 | 0.7210 |
| 0.0713 | 56.0 | 1400 | 0.7425 | 0.7256 |
| 0.0669 | 60.0 | 1500 | 0.7426 | 0.7256 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.10.3
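### Inference (sketch)
A minimal usage sketch via the ASR pipeline (the file name is illustrative; the chunking arguments are optional and only needed for recordings longer than a few seconds):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="azuur/wav2vec2-base-gn-demo")
# chunking lets the CTC model transcribe longer recordings piece by piece
print(asr("guarani_sample.wav", chunk_length_s=10, stride_length_s=(4, 2)))
```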
|
anuragshas/wav2vec2-xls-r-300m-lv-cv8-with-lm
|
anuragshas
| 2022-03-24T11:57:47Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"lv",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- lv
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- robust-speech-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: XLS-R-300M - Latvian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: lv
metrics:
- name: Test WER
type: wer
value: 9.633
- name: Test CER
type: cer
value: 2.614
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: lv
metrics:
- name: Test WER
type: wer
value: 36.11
- name: Test CER
type: cer
value: 14.244
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: lv
metrics:
- name: Test WER
type: wer
value: 44.12
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLS-R-300M - Latvian
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - LV dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1660
- Wer: 0.1705
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.489 | 2.56 | 400 | 3.3590 | 1.0 |
| 2.9903 | 5.13 | 800 | 2.9704 | 1.0001 |
| 1.6712 | 7.69 | 1200 | 0.6179 | 0.6566 |
| 1.2635 | 10.26 | 1600 | 0.3176 | 0.4531 |
| 1.0819 | 12.82 | 2000 | 0.2517 | 0.3508 |
| 1.0136 | 15.38 | 2400 | 0.2257 | 0.3124 |
| 0.9625 | 17.95 | 2800 | 0.1975 | 0.2311 |
| 0.901 | 20.51 | 3200 | 0.1986 | 0.2097 |
| 0.8842 | 23.08 | 3600 | 0.1904 | 0.2039 |
| 0.8542 | 25.64 | 4000 | 0.1847 | 0.1981 |
| 0.8244 | 28.21 | 4400 | 0.1805 | 0.1847 |
| 0.7689 | 30.77 | 4800 | 0.1736 | 0.1832 |
| 0.7825 | 33.33 | 5200 | 0.1698 | 0.1821 |
| 0.7817 | 35.9 | 5600 | 0.1758 | 0.1803 |
| 0.7488 | 38.46 | 6000 | 0.1663 | 0.1760 |
| 0.7171 | 41.03 | 6400 | 0.1636 | 0.1721 |
| 0.7222 | 43.59 | 6800 | 0.1663 | 0.1729 |
| 0.7156 | 46.15 | 7200 | 0.1633 | 0.1715 |
| 0.7121 | 48.72 | 7600 | 0.1666 | 0.1718 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id anuragshas/wav2vec2-xls-r-300m-lv-cv8-with-lm --dataset mozilla-foundation/common_voice_8_0 --config lv --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id anuragshas/wav2vec2-xls-r-300m-lv-cv8-with-lm --dataset speech-recognition-community-v2/dev_data --config lv --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
### Inference With LM
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "anuragshas/wav2vec2-xls-r-300m-lv-cv8-with-lm"
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "lv", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
logits = model(input_values).logits
transcription = processor.batch_decode(logits.numpy()).text
# => "domāju ka viņam viss labi"
```
### Eval results on Common Voice 8 "test" (WER):
| Without LM | With LM (run `./eval.py`) |
|---|---|
| 16.997 | 9.633 |
|
anuragshas/wav2vec2-large-xls-r-300m-ur-cv8
|
anuragshas
| 2022-03-24T11:57:44Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"ur",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- ur
license: apache-2.0
tags:
- generated_from_trainer
- robust-speech-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
metrics:
- wer
model-index:
- name: wav2vec2-large-xls-r-300m-ur-cv8
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
type: mozilla-foundation/common_voice_8_0
name: Common Voice 8
args: ur
metrics:
- type: wer
value: 42.376
name: Test WER
- name: Test CER
type: cer
value: 18.18
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-ur-cv8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1443
- Wer: 0.5677
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 3.6269 | 15.98 | 400 | 3.3246 | 1.0 |
| 3.0546 | 31.98 | 800 | 2.8148 | 0.9963 |
| 1.4589 | 47.98 | 1200 | 1.0237 | 0.6584 |
| 1.0911 | 63.98 | 1600 | 0.9524 | 0.5966 |
| 0.8879 | 79.98 | 2000 | 0.9827 | 0.5822 |
| 0.7467 | 95.98 | 2400 | 0.9923 | 0.5840 |
| 0.6427 | 111.98 | 2800 | 0.9988 | 0.5714 |
| 0.5685 | 127.98 | 3200 | 1.0872 | 0.5807 |
| 0.5068 | 143.98 | 3600 | 1.1194 | 0.5822 |
| 0.463 | 159.98 | 4000 | 1.1138 | 0.5692 |
| 0.4212 | 175.98 | 4400 | 1.1232 | 0.5714 |
| 0.4056 | 191.98 | 4800 | 1.1443 | 0.5677 |
### Framework versions
- Transformers 4.16.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id anuragshas/wav2vec2-large-xls-r-300m-ur-cv8 --dataset mozilla-foundation/common_voice_8_0 --config ur --split test
```
### Inference With LM
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "anuragshas/wav2vec2-large-xls-r-300m-ur-cv8"
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "ur", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
logits = model(input_values).logits
transcription = processor.batch_decode(logits.numpy()).text
# => "اب نے ٹ پیس ان لیتے ہیں"
```
### Eval results on Common Voice 8 "test" (WER):
| Without LM | With LM (run `./eval.py`) |
|---|---|
| 52.146 | 42.376 |
|
anuragshas/wav2vec2-large-xls-r-300m-or
|
anuragshas
| 2022-03-24T11:57:41Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"robust-speech-event",
"hf-asr-leaderboard",
"or",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- or
license: apache-2.0
tags:
- automatic-speech-recognition
- robust-speech-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_7_0
metrics:
- wer
model-index:
- name: wav2vec2-large-xls-r-300m-or
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
type: mozilla-foundation/common_voice_7_0
name: Common Voice 7
args: or
metrics:
- type: wer
value: 47.186
name: Test WER
- name: Test CER
type: cer
value: 11.82
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-or
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6618
- Wer: 0.5166
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.12
- num_epochs: 240
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 6.0493 | 23.53 | 400 | 2.9728 | 1.0 |
| 0.5306 | 47.06 | 800 | 1.2895 | 0.6138 |
| 0.1253 | 70.59 | 1200 | 1.6854 | 0.5703 |
| 0.0763 | 94.12 | 1600 | 1.9433 | 0.5870 |
| 0.0552 | 117.65 | 2000 | 1.4393 | 0.5575 |
| 0.0382 | 141.18 | 2400 | 1.4665 | 0.5537 |
| 0.0286 | 164.71 | 2800 | 1.5441 | 0.5320 |
| 0.0212 | 188.24 | 3200 | 1.6502 | 0.5115 |
| 0.0168 | 211.76 | 3600 | 1.6411 | 0.5332 |
| 0.0129 | 235.29 | 4000 | 1.6618 | 0.5166 |
### Framework versions
- Transformers 4.16.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_7_0` with split `test`
```bash
python eval.py --model_id anuragshas/wav2vec2-large-xls-r-300m-or --dataset mozilla-foundation/common_voice_7_0 --config or --split test
```
### Inference With LM
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "anuragshas/wav2vec2-large-xls-r-300m-or"
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_7_0", "or", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
logits = model(input_values).logits
transcription = processor.batch_decode(logits.numpy()).text
# => "ପରରାଏ ବାଲା ଗସ୍ତି ଫାଣ୍ଡି ଗୋପାଳ ପରଠାରୁ ଦେଢ଼କଶ ଦୂର"
```
### Eval results on Common Voice 7 "test" (WER):
| Without LM | With LM (run `./eval.py`) |
|---|---|
| 51.92 | 47.186 |
|
anuragshas/wav2vec2-large-xls-r-300m-ha-cv8
|
anuragshas
| 2022-03-24T11:57:39Z | 18 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"ha",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- ha
license: apache-2.0
tags:
- generated_from_trainer
- robust-speech-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
metrics:
- wer
model-index:
- name: XLS-R-300M - Hausa
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
type: mozilla-foundation/common_voice_8_0
name: Common Voice 8
args: ha
metrics:
- type: wer
value: 36.295
name: Test WER
- name: Test CER
type: cer
value: 11.073
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLS-R-300M - Hausa
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6094
- Wer: 0.5234
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 13
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 1000
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.9599 | 6.56 | 400 | 2.8650 | 1.0 |
| 2.7357 | 13.11 | 800 | 2.7377 | 0.9951 |
| 1.3012 | 19.67 | 1200 | 0.6686 | 0.7111 |
| 1.0454 | 26.23 | 1600 | 0.5686 | 0.6137 |
| 0.9069 | 32.79 | 2000 | 0.5576 | 0.5815 |
| 0.82 | 39.34 | 2400 | 0.5502 | 0.5591 |
| 0.7413 | 45.9 | 2800 | 0.5970 | 0.5586 |
| 0.6872 | 52.46 | 3200 | 0.5817 | 0.5428 |
| 0.634 | 59.02 | 3600 | 0.5636 | 0.5314 |
| 0.6022 | 65.57 | 4000 | 0.5780 | 0.5229 |
| 0.5705 | 72.13 | 4400 | 0.6036 | 0.5323 |
| 0.5408 | 78.69 | 4800 | 0.6119 | 0.5336 |
| 0.5225 | 85.25 | 5200 | 0.6105 | 0.5270 |
| 0.5265 | 91.8 | 5600 | 0.6034 | 0.5231 |
| 0.5154 | 98.36 | 6000 | 0.6094 | 0.5234 |
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id anuragshas/wav2vec2-large-xls-r-300m-ha-cv8 --dataset mozilla-foundation/common_voice_8_0 --config ha --split test
```
### Inference With LM
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "anuragshas/wav2vec2-large-xls-r-300m-ha-cv8"
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "ha", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
logits = model(input_values).logits
transcription = processor.batch_decode(logits.numpy()).text
# => "kakin hade ya ke da kyautar"
```
### Eval results on Common Voice 8 "test" (WER):
| Without LM | With LM (run `./eval.py`) |
|---|---|
| 47.821 | 36.295 |
|
RuudVelo/wav2vec2-large-xls-r-1b-cv8-mt
|
RuudVelo
| 2022-03-24T11:57:36Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"mt",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language:
- mt
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- mt
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-large-xls-r-1b-cv8-mt
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: mt
metrics:
- name: Test WER
type: wer
value: 17.57
- name: Test CER
type: cer
value: 3.86
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: mt
metrics:
- name: Test WER
type: wer
value: null
- name: Test CER
type: cer
value: null
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-1b-cv8-mt
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2210
- Wer: 0.1974
## Model description
Note: another version of this model, which adds a KenLM 3-gram language model, is available and performs better than this one. See https://huggingface.co/RuudVelo/wav2vec2-large-xls-r-1b-cv8-mt-lm
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following config and hyperparameters were used during training:
from transformers import Wav2Vec2ForCTC, TrainingArguments

model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-xls-r-1b",
    attention_dropout=0.05,
    hidden_dropout=0.05,
    feat_proj_dropout=0.05,
    mask_time_prob=0.55,
    mask_feature_prob=0.10,
    layerdrop=0.05,
    ctc_zero_infinity=True,
    ctc_loss_reduction="mean",
    pad_token_id=processor.tokenizer.pad_token_id,  # processor is built from the vocabulary beforehand
    vocab_size=len(processor.tokenizer),
)

training_args = TrainingArguments(
    output_dir=repo_name,  # repo_name is the target Hub repository defined earlier
    group_by_length=True,
    per_device_train_batch_size=32,
    gradient_accumulation_steps=2,
    evaluation_strategy="steps",
    num_train_epochs=50,
    gradient_checkpointing=True,
    fp16=True,
    save_steps=400,
    eval_steps=400,
    logging_steps=400,
    learning_rate=5.5e-05,
    warmup_steps=500,
    save_total_limit=2,
    push_to_hub=True,
    report_to="tensorboard",
)
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4564 | 13.33 | 400 | 0.3783 | 0.3981 |
| 0.7931 | 26.66 | 800 | 0.2377 | 0.2298 |
| 0.5364 | 39.98 | 1200 | 0.2210 | 0.1974 |
Note that the evaluation WER of 19.74 differs from the 17.57 reported above. The discrepancy was caused by a bug found while processing files with an older version of the datasets library; the correct library versions are listed below.
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
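The card does not include an inference snippet; the sketch below is a minimal, non-authoritative illustration that mirrors the Common Voice streaming pattern used by other cards in this collection. It assumes plain greedy CTC decoding (no KenLM) and streams one sample from the test split at the usual 48 kHz source rate; the split choice is an assumption.

```python
import torch
import torchaudio.functional as F
from datasets import load_dataset
from transformers import AutoModelForCTC, Wav2Vec2Processor

model_id = "RuudVelo/wav2vec2-large-xls-r-1b-cv8-mt"
model = AutoModelForCTC.from_pretrained(model_id)
processor = Wav2Vec2Processor.from_pretrained(model_id)

# Stream a single Common Voice 8 sample and resample 48 kHz -> 16 kHz
sample = next(iter(load_dataset("mozilla-foundation/common_voice_8_0", "mt",
                                split="test", streaming=True, use_auth_token=True)))
audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()

inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])  # greedy CTC transcription
```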
|
NbAiLab/wav2vec2-xls-r-1b-npsc-bokmaal
|
NbAiLab
| 2022-03-24T11:57:25Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:NbAiLab/NPSC",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
- automatic-speech-recognition
- NbAiLab/NPSC
- robust-speech-event
- "no"
- nb-NO
- hf-asr-leaderboard
datasets:
- NbAiLab/NPSC
language:
- nb-NO
model-index:
- name: wav2vec2-xls-r-1b-npsc-bokmaal
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: NPSC
type: NbAiLab/NPSC
args: 16K_mp3_bokmaal
metrics:
- name: "Test (Bokm\xE5l) WER"
type: wer
value: 0.07901700231893541
- name: "Test (Bokm\xE5l) CER"
type: cer
value: 0.029734583252347752
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-1b-npsc
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the [NbAiLab/NPSC (16K_mp3_bokmaal)](https://huggingface.co/datasets/NbAiLab/NPSC/viewer/16K_mp3_bokmaal/train) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1598
- WER: 0.0966
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 15.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.8361 | 0.32 | 500 | 0.6304 | 0.4970 |
| 0.5703 | 0.64 | 1000 | 0.3195 | 0.2775 |
| 0.5451 | 0.97 | 1500 | 0.2700 | 0.2246 |
| 0.47 | 1.29 | 2000 | 0.2564 | 0.2329 |
| 0.4063 | 1.61 | 2500 | 0.2459 | 0.2099 |
| 0.374 | 1.93 | 3000 | 0.2175 | 0.1894 |
| 0.3297 | 2.26 | 3500 | 0.2036 | 0.1755 |
| 0.3145 | 2.58 | 4000 | 0.1957 | 0.1757 |
| 0.3989 | 2.9 | 4500 | 0.1923 | 0.1723 |
| 0.271 | 3.22 | 5000 | 0.1889 | 0.1649 |
| 0.2758 | 3.55 | 5500 | 0.1768 | 0.1588 |
| 0.2683 | 3.87 | 6000 | 0.1720 | 0.1534 |
| 0.2341 | 4.19 | 6500 | 0.1689 | 0.1471 |
| 0.2316 | 4.51 | 7000 | 0.1706 | 0.1405 |
| 0.2383 | 4.84 | 7500 | 0.1637 | 0.1426 |
| 0.2148 | 5.16 | 8000 | 0.1584 | 0.1347 |
| 0.2085 | 5.48 | 8500 | 0.1601 | 0.1387 |
| 0.2944 | 5.8 | 9000 | 0.1566 | 0.1294 |
| 0.1944 | 6.13 | 9500 | 0.1494 | 0.1271 |
| 0.1853 | 6.45 | 10000 | 0.1561 | 0.1247 |
| 0.235 | 6.77 | 10500 | 0.1461 | 0.1215 |
| 0.2286 | 7.09 | 11000 | 0.1447 | 0.1167 |
| 0.1781 | 7.41 | 11500 | 0.1502 | 0.1199 |
| 0.1714 | 7.74 | 12000 | 0.1425 | 0.1179 |
| 0.1725 | 8.06 | 12500 | 0.1427 | 0.1173 |
| 0.143 | 8.38 | 13000 | 0.1448 | 0.1142 |
| 0.154 | 8.7 | 13500 | 0.1392 | 0.1104 |
| 0.1447 | 9.03 | 14000 | 0.1404 | 0.1094 |
| 0.1471 | 9.35 | 14500 | 0.1404 | 0.1088 |
| 0.1479 | 9.67 | 15000 | 0.1414 | 0.1133 |
| 0.1607 | 9.99 | 15500 | 0.1458 | 0.1171 |
| 0.166 | 10.32 | 16000 | 0.1652 | 0.1264 |
| 0.188 | 10.64 | 16500 | 0.1713 | 0.1322 |
| 0.1461 | 10.96 | 17000 | 0.1423 | 0.1111 |
| 0.1289 | 11.28 | 17500 | 0.1388 | 0.1097 |
| 0.1273 | 11.61 | 18000 | 0.1438 | 0.1074 |
| 0.1317 | 11.93 | 18500 | 0.1312 | 0.1066 |
| 0.1448 | 12.25 | 19000 | 0.1446 | 0.1042 |
| 0.1424 | 12.57 | 19500 | 0.1386 | 0.1015 |
| 0.1392 | 12.89 | 20000 | 0.1379 | 0.1005 |
| 0.1408 | 13.22 | 20500 | 0.1408 | 0.0992 |
| 0.1239 | 13.54 | 21000 | 0.1338 | 0.0968 |
| 0.1244 | 13.86 | 21500 | 0.1335 | 0.0957 |
| 0.1254 | 14.18 | 22000 | 0.1382 | 0.0950 |
| 0.1597 | 14.51 | 22500 | 0.1544 | 0.0970 |
| 0.1566 | 14.83 | 23000 | 0.1589 | 0.0963 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3.dev0
- Tokenizers 0.11.0
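The card stops at the framework versions; as a quick, hedged usage sketch (the file name below is hypothetical, and ffmpeg is assumed to be installed so the pipeline can decode and resample the audio to 16 kHz), the model can be run through the ASR pipeline:

```python
from transformers import pipeline

# Hypothetical local recording; the pipeline decodes and resamples it via ffmpeg.
asr = pipeline("automatic-speech-recognition", model="NbAiLab/wav2vec2-xls-r-1b-npsc-bokmaal")
print(asr("bokmaal_sample.wav")["text"])
```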
|
NbAiLab/wav2vec2-large-voxrex-npsc-bokmaal
|
NbAiLab
| 2022-03-24T11:57:23Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:NbAiLab/NPSC",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
- automatic-speech-recognition
- NbAiLab/NPSC
- robust-speech-event
- "no"
- nb-NO
- hf-asr-leaderboard
datasets:
- NbAiLab/NPSC
language:
- nb-NO
model-index:
- name: wav2vec2-large-voxrex-npsc-bokmaal
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: NPSC
type: NbAiLab/NPSC
args: 16K_mp3_bokmaal
metrics:
- name: "Test (Bokm\xE5l) WER"
type: wer
value: 0.07028972259374369
- name: "Test (Bokm\xE5l) CER"
type: cer
value: 0.026870600821650645
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-voxrex-npsc-bokmaal
This model was fine-tuned on the [NbAiLab/NPSC (16K_mp3_bokmaal)](https://huggingface.co/datasets/NbAiLab/NPSC/viewer/16K_mp3_bokmaal/train) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1311
- Wer: 0.1038
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8.379967082059723e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2127 | 0.32 | 500 | 0.1335 | 0.1047 |
| 0.1976 | 0.64 | 1000 | 0.1309 | 0.1039 |
| 0.1887 | 0.97 | 1500 | 0.1306 | 0.1040 |
| 0.18 | 1.29 | 2000 | 0.1311 | 0.1038 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4.dev0
- Tokenizers 0.11.0
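As with the 1B variant above, no usage example is provided; a minimal pipeline sketch follows (the audio path is hypothetical and ffmpeg is assumed to be available):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="NbAiLab/wav2vec2-large-voxrex-npsc-bokmaal")
print(asr("bokmaal_sample.wav")["text"])  # hypothetical 16 kHz mono recording
```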
|
DrishtiSharma/wav2vec2-xls-r-300m-rm-vallader-d1
|
DrishtiSharma
| 2022-03-24T11:57:12Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"rm-vallader",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language:
- rm-vallader
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- rm-vallader
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-xls-r-300m-rm-vallader-d1
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: rm-vallader
metrics:
- name: Test WER
type: wer
value: 0.26472007722007723
- name: Test CER
type: cer
value: 0.05860608074430969
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: vot
metrics:
- name: Test WER
type: wer
value: NA
- name: Test CER
type: cer
value: NA
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-rm-vallader-d1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - RM-VALLADER dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2754
- Wer: 0.2831
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-xls-r-300m-rm-vallader-d1 --dataset mozilla-foundation/common_voice_8_0 --config rm-vallader --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
Romansh-Vallader language not found in speech-recognition-community-v2/dev_data
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.927 | 15.15 | 500 | 2.9196 | 1.0 |
| 1.3835 | 30.3 | 1000 | 0.5879 | 0.5866 |
| 0.7415 | 45.45 | 1500 | 0.3077 | 0.3316 |
| 0.5575 | 60.61 | 2000 | 0.2735 | 0.2954 |
| 0.4581 | 75.76 | 2500 | 0.2707 | 0.2802 |
| 0.3977 | 90.91 | 3000 | 0.2785 | 0.2809 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
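The eval.py command above covers scoring only; the following is a minimal inference sketch, assuming greedy CTC decoding without a language model and the Common Voice 8 streaming setup used elsewhere in this collection (test split and 48 kHz source rate are assumptions).

```python
import torch
import torchaudio.functional as F
from datasets import load_dataset
from transformers import AutoModelForCTC, Wav2Vec2Processor

model_id = "DrishtiSharma/wav2vec2-xls-r-300m-rm-vallader-d1"
model = AutoModelForCTC.from_pretrained(model_id)
processor = Wav2Vec2Processor.from_pretrained(model_id)

# Stream one Common Voice 8 sample and resample 48 kHz -> 16 kHz
sample = next(iter(load_dataset("mozilla-foundation/common_voice_8_0", "rm-vallader",
                                split="test", streaming=True, use_auth_token=True)))
audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()

inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])  # greedy CTC transcription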
|
DrishtiSharma/wav2vec2-xls-r-300m-mt-o1
|
DrishtiSharma
| 2022-03-24T11:57:03Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"mt",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language:
- mt
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- mt
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-xls-r-300m-mt-o1
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: mt
metrics:
- name: Test WER
type: wer
value: 0.2378369069146646
- name: Test CER
type: cer
value: 0.050364163712536256
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: mt
metrics:
- name: Test WER
type: wer
value: NA
- name: Test CER
type: cer
value: NA
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-mt-o1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1987
- Wer: 0.1920
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-xls-r-300m-mt-o1 --dataset mozilla-foundation/common_voice_8_0 --config mt --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
Maltese language not found in speech-recognition-community-v2/dev_data!
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.1721 | 18.02 | 2000 | 0.3831 | 0.4066 |
| 0.7849 | 36.04 | 4000 | 0.2191 | 0.2417 |
| 0.6723 | 54.05 | 6000 | 0.2056 | 0.2134 |
| 0.6015 | 72.07 | 8000 | 0.2008 | 0.2031 |
| 0.5386 | 90.09 | 10000 | 0.1967 | 0.1953 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
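The card gives an evaluation command but no plain inference example; a minimal sketch (hypothetical audio path, ffmpeg assumed for decoding) is:

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="DrishtiSharma/wav2vec2-xls-r-300m-mt-o1")
print(asr("maltese_sample.wav")["text"])  # hypothetical audio file, resampled to 16 kHz by the pipeline
```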
|
DrishtiSharma/wav2vec2-large-xls-r-300m-vot-final-a2
|
DrishtiSharma
| 2022-03-24T11:57:00Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"vot",
"robust-speech-event",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language:
- vot
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- vot
- robust-speech-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-large-xls-r-300m-vot-final-a2
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: vot
metrics:
- name: Test WER
type: wer
value: 0.8333333333333334
- name: Test CER
type: cer
value: 0.48672566371681414
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: vot
metrics:
- name: Test WER
type: wer
value: NA
- name: Test CER
type: cer
value: NA
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-vot-final-a2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - VOT dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8745
- Wer: 0.8333
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-vot-final-a2 --dataset mozilla-foundation/common_voice_8_0 --config vot --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
Votic language isn't available in speech-recognition-community-v2/dev_data
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 340
- num_epochs: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 11.1216 | 33.33 | 100 | 4.2848 | 1.0 |
| 2.9982 | 66.67 | 200 | 2.8665 | 1.0 |
| 1.5476 | 100.0 | 300 | 2.3022 | 0.8889 |
| 0.2776 | 133.33 | 400 | 2.7480 | 0.8889 |
| 0.1136 | 166.67 | 500 | 2.5383 | 0.8889 |
| 0.0489 | 200.0 | 600 | 2.8745 | 0.8333 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
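No usage example is included; as a hedged sketch (hypothetical file name, ffmpeg assumed), inference can be run through the ASR pipeline:

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="DrishtiSharma/wav2vec2-large-xls-r-300m-vot-final-a2")
print(asr("votic_sample.wav")["text"])  # hypothetical recording; pipeline handles resampling to 16 kHz
```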
|
DrishtiSharma/wav2vec2-large-xls-r-300m-sat-a3
|
DrishtiSharma
| 2022-03-24T11:56:55Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"sat",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language:
- sat
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- sat
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-large-xls-r-300m-sat-a3
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: sat
metrics:
- name: Test WER
type: wer
value: 0.357429718875502
- name: Test CER
type: cer
value: 0.14203730272596843
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: sat
metrics:
- name: Test WER
type: wer
value: NA
- name: Test CER
type: cer
value: NA
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-sat-a3
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - SAT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8961
- Wer: 0.3976
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-sat-a3 --dataset mozilla-foundation/common_voice_8_0 --config sat --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
Note: Santali (Ol Chiki) language not found in speech-recognition-community-v2/dev_data
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 11.1266 | 33.29 | 100 | 2.8577 | 1.0 |
| 2.1549 | 66.57 | 200 | 1.0799 | 0.5542 |
| 0.5628 | 99.86 | 300 | 0.7973 | 0.4016 |
| 0.0779 | 133.29 | 400 | 0.8424 | 0.4177 |
| 0.0404 | 166.57 | 500 | 0.9048 | 0.4137 |
| 0.0212 | 199.86 | 600 | 0.8961 | 0.3976 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
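For completeness, a minimal inference sketch follows, assuming greedy CTC decoding without a language model and the same Common Voice 8 streaming pattern used by other cards here (the test split and 48 kHz source rate are assumptions).

```python
import torch
import torchaudio.functional as F
from datasets import load_dataset
from transformers import AutoModelForCTC, Wav2Vec2Processor

model_id = "DrishtiSharma/wav2vec2-large-xls-r-300m-sat-a3"
model = AutoModelForCTC.from_pretrained(model_id)
processor = Wav2Vec2Processor.from_pretrained(model_id)

# Stream one Common Voice 8 sample and resample 48 kHz -> 16 kHz
sample = next(iter(load_dataset("mozilla-foundation/common_voice_8_0", "sat",
                                split="test", streaming=True, use_auth_token=True)))
audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()

inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])  # greedy CTC transcription
```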
|
DrishtiSharma/wav2vec2-large-xls-r-300m-myv-v1
|
DrishtiSharma
| 2022-03-24T11:56:53Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"myv",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language:
- myv
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- myv
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-large-xls-r-300m-myv-v1
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: myv
metrics:
- name: Test WER
type: wer
value: 0.599548532731377
- name: Test CER
type: cer
value: 0.12953851902597
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: myv
metrics:
- name: Test WER
type: wer
value: NA
- name: Test CER
type: cer
value: NA
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-myv-v1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MYV dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8537
- Wer: 0.6160
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-myv-v1 --dataset mozilla-foundation/common_voice_8_0 --config myv --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
Erzya language not found in speech-recognition-community-v2/dev_data!
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000222
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 150
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 19.453 | 1.92 | 50 | 16.4001 | 1.0 |
| 9.6875 | 3.85 | 100 | 5.4468 | 1.0 |
| 4.9988 | 5.77 | 150 | 4.3507 | 1.0 |
| 4.1148 | 7.69 | 200 | 3.6753 | 1.0 |
| 3.4922 | 9.62 | 250 | 3.3103 | 1.0 |
| 3.2443 | 11.54 | 300 | 3.1741 | 1.0 |
| 3.164 | 13.46 | 350 | 3.1346 | 1.0 |
| 3.0954 | 15.38 | 400 | 3.0428 | 1.0 |
| 3.0076 | 17.31 | 450 | 2.9137 | 1.0 |
| 2.6883 | 19.23 | 500 | 2.1476 | 0.9978 |
| 1.5124 | 21.15 | 550 | 0.8955 | 0.8225 |
| 0.8711 | 23.08 | 600 | 0.6948 | 0.7591 |
| 0.6695 | 25.0 | 650 | 0.6683 | 0.7636 |
| 0.5606 | 26.92 | 700 | 0.6821 | 0.7435 |
| 0.503 | 28.85 | 750 | 0.7220 | 0.7516 |
| 0.4528 | 30.77 | 800 | 0.6638 | 0.7324 |
| 0.4219 | 32.69 | 850 | 0.7120 | 0.7435 |
| 0.4109 | 34.62 | 900 | 0.7122 | 0.7511 |
| 0.3887 | 36.54 | 950 | 0.7179 | 0.7199 |
| 0.3895 | 38.46 | 1000 | 0.7322 | 0.7525 |
| 0.391 | 40.38 | 1050 | 0.6850 | 0.7364 |
| 0.3537 | 42.31 | 1100 | 0.7571 | 0.7279 |
| 0.3267 | 44.23 | 1150 | 0.7575 | 0.7257 |
| 0.3195 | 46.15 | 1200 | 0.7580 | 0.6998 |
| 0.2891 | 48.08 | 1250 | 0.7452 | 0.7101 |
| 0.294 | 50.0 | 1300 | 0.7316 | 0.6945 |
| 0.2854 | 51.92 | 1350 | 0.7241 | 0.6757 |
| 0.2801 | 53.85 | 1400 | 0.7532 | 0.6887 |
| 0.2502 | 55.77 | 1450 | 0.7587 | 0.6811 |
| 0.2427 | 57.69 | 1500 | 0.7231 | 0.6851 |
| 0.2311 | 59.62 | 1550 | 0.7288 | 0.6632 |
| 0.2176 | 61.54 | 1600 | 0.7711 | 0.6664 |
| 0.2117 | 63.46 | 1650 | 0.7914 | 0.6940 |
| 0.2114 | 65.38 | 1700 | 0.8065 | 0.6918 |
| 0.1913 | 67.31 | 1750 | 0.8372 | 0.6945 |
| 0.1897 | 69.23 | 1800 | 0.8051 | 0.6869 |
| 0.1865 | 71.15 | 1850 | 0.8076 | 0.6740 |
| 0.1844 | 73.08 | 1900 | 0.7935 | 0.6708 |
| 0.1757 | 75.0 | 1950 | 0.8015 | 0.6610 |
| 0.1636 | 76.92 | 2000 | 0.7614 | 0.6414 |
| 0.1637 | 78.85 | 2050 | 0.8123 | 0.6592 |
| 0.1599 | 80.77 | 2100 | 0.7907 | 0.6566 |
| 0.1498 | 82.69 | 2150 | 0.8641 | 0.6757 |
| 0.1545 | 84.62 | 2200 | 0.7438 | 0.6682 |
| 0.1433 | 86.54 | 2250 | 0.8014 | 0.6624 |
| 0.1427 | 88.46 | 2300 | 0.7758 | 0.6646 |
| 0.1423 | 90.38 | 2350 | 0.7741 | 0.6423 |
| 0.1298 | 92.31 | 2400 | 0.7938 | 0.6414 |
| 0.1111 | 94.23 | 2450 | 0.7976 | 0.6467 |
| 0.1243 | 96.15 | 2500 | 0.7916 | 0.6481 |
| 0.1215 | 98.08 | 2550 | 0.7594 | 0.6392 |
| 0.113 | 100.0 | 2600 | 0.8236 | 0.6392 |
| 0.1077 | 101.92 | 2650 | 0.7959 | 0.6347 |
| 0.0988 | 103.85 | 2700 | 0.8189 | 0.6392 |
| 0.0953 | 105.77 | 2750 | 0.8157 | 0.6414 |
| 0.0889 | 107.69 | 2800 | 0.7946 | 0.6369 |
| 0.0929 | 109.62 | 2850 | 0.8255 | 0.6360 |
| 0.0822 | 111.54 | 2900 | 0.8320 | 0.6334 |
| 0.086 | 113.46 | 2950 | 0.8539 | 0.6490 |
| 0.0825 | 115.38 | 3000 | 0.8438 | 0.6418 |
| 0.0727 | 117.31 | 3050 | 0.8568 | 0.6481 |
| 0.0717 | 119.23 | 3100 | 0.8447 | 0.6512 |
| 0.0815 | 121.15 | 3150 | 0.8470 | 0.6445 |
| 0.0689 | 123.08 | 3200 | 0.8264 | 0.6249 |
| 0.0726 | 125.0 | 3250 | 0.7981 | 0.6169 |
| 0.0648 | 126.92 | 3300 | 0.8237 | 0.6200 |
| 0.0632 | 128.85 | 3350 | 0.8416 | 0.6249 |
| 0.06 | 130.77 | 3400 | 0.8276 | 0.6173 |
| 0.0616 | 132.69 | 3450 | 0.8429 | 0.6209 |
| 0.0614 | 134.62 | 3500 | 0.8485 | 0.6271 |
| 0.0539 | 136.54 | 3550 | 0.8598 | 0.6218 |
| 0.0555 | 138.46 | 3600 | 0.8557 | 0.6169 |
| 0.0604 | 140.38 | 3650 | 0.8436 | 0.6186 |
| 0.0556 | 142.31 | 3700 | 0.8428 | 0.6178 |
| 0.051 | 144.23 | 3750 | 0.8440 | 0.6142 |
| 0.0526 | 146.15 | 3800 | 0.8566 | 0.6142 |
| 0.052 | 148.08 | 3850 | 0.8544 | 0.6178 |
| 0.0519 | 150.0 | 3900 | 0.8537 | 0.6160 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
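The card lists only the scoring command; a minimal, hedged inference sketch (hypothetical audio file, ffmpeg assumed) is:

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="DrishtiSharma/wav2vec2-large-xls-r-300m-myv-v1")
print(asr("erzya_sample.wav")["text"])  # hypothetical recording; resampled to 16 kHz by the pipeline
```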
|
DrishtiSharma/wav2vec2-large-xls-r-300m-hsb-v3
|
DrishtiSharma
| 2022-03-24T11:56:50Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"hsb",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language:
- hsb
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- hsb
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-large-xls-r-300m-hsb-v3
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: hsb
metrics:
- name: Test WER
type: wer
value: 0.4763681592039801
- name: Test CER
type: cer
value: 0.11194945177476305
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: hsb
metrics:
- name: Test WER
type: wer
value: NA
- name: Test CER
type: cer
value: NA
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hsb-v3
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - HSB dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6549
- Wer: 0.4827
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-hsb-v3 --dataset mozilla-foundation/common_voice_8_0 --config hsb --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
Upper Sorbian (hsb) language not found in speech-recognition-community-v2/dev_data!
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00045
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 8.8951 | 3.23 | 100 | 3.6396 | 1.0 |
| 3.314 | 6.45 | 200 | 3.2331 | 1.0 |
| 3.1931 | 9.68 | 300 | 3.0947 | 0.9906 |
| 1.7079 | 12.9 | 400 | 0.8865 | 0.8499 |
| 0.6859 | 16.13 | 500 | 0.7994 | 0.7529 |
| 0.4804 | 19.35 | 600 | 0.7783 | 0.7069 |
| 0.3506 | 22.58 | 700 | 0.6904 | 0.6321 |
| 0.2695 | 25.81 | 800 | 0.6519 | 0.5926 |
| 0.222 | 29.03 | 900 | 0.7041 | 0.5720 |
| 0.1828 | 32.26 | 1000 | 0.6608 | 0.5513 |
| 0.1474 | 35.48 | 1100 | 0.7129 | 0.5319 |
| 0.1269 | 38.71 | 1200 | 0.6664 | 0.5056 |
| 0.1077 | 41.94 | 1300 | 0.6712 | 0.4942 |
| 0.0934 | 45.16 | 1400 | 0.6467 | 0.4879 |
| 0.0819 | 48.39 | 1500 | 0.6549 | 0.4827 |
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
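No inference example is given; as a hedged sketch (the file name is hypothetical and ffmpeg is assumed for audio decoding):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="DrishtiSharma/wav2vec2-large-xls-r-300m-hsb-v3")
print(asr("upper_sorbian_sample.wav")["text"])  # hypothetical recording, resampled to 16 kHz by the pipeline
```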
|
DrishtiSharma/wav2vec2-large-xls-r-300m-hsb-v1
|
DrishtiSharma
| 2022-03-24T11:56:45Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"hsb",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language:
- hsb
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- hsb
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-large-xls-r-300m-hsb-v1
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: hsb
metrics:
- name: Test WER
type: wer
value: 0.4393
- name: Test CER
type: cer
value: 0.1036
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: hsb
metrics:
- name: Test WER
type: wer
value: NA
- name: Test CER
type: cer
value: NA
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hsb-v1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - HSB dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5684
- Wer: 0.4402
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-hsb-v1 --dataset mozilla-foundation/common_voice_8_0 --config hsb --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
Upper Sorbian language isn't available in speech-recognition-community-v2/dev_data
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00045
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 8.972 | 3.23 | 100 | 3.7498 | 1.0 |
| 3.3401 | 6.45 | 200 | 3.2320 | 1.0 |
| 3.2046 | 9.68 | 300 | 3.1741 | 0.9806 |
| 2.4031 | 12.9 | 400 | 1.0579 | 0.8996 |
| 1.0427 | 16.13 | 500 | 0.7989 | 0.7557 |
| 0.741 | 19.35 | 600 | 0.6405 | 0.6299 |
| 0.5699 | 22.58 | 700 | 0.6129 | 0.5928 |
| 0.4607 | 25.81 | 800 | 0.6548 | 0.5695 |
| 0.3827 | 29.03 | 900 | 0.6268 | 0.5190 |
| 0.3282 | 32.26 | 1000 | 0.5919 | 0.5016 |
| 0.2764 | 35.48 | 1100 | 0.5953 | 0.4805 |
| 0.2335 | 38.71 | 1200 | 0.5717 | 0.4728 |
| 0.2106 | 41.94 | 1300 | 0.5674 | 0.4569 |
| 0.1859 | 45.16 | 1400 | 0.5685 | 0.4502 |
| 0.1592 | 48.39 | 1500 | 0.5684 | 0.4402 |
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
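Beyond the eval.py command, the card has no inference snippet; the sketch below is a minimal illustration assuming greedy CTC decoding without a language model and the Common Voice 8 streaming setup used elsewhere in this collection (test split and 48 kHz source rate are assumptions).

```python
import torch
import torchaudio.functional as F
from datasets import load_dataset
from transformers import AutoModelForCTC, Wav2Vec2Processor

model_id = "DrishtiSharma/wav2vec2-large-xls-r-300m-hsb-v1"
model = AutoModelForCTC.from_pretrained(model_id)
processor = Wav2Vec2Processor.from_pretrained(model_id)

# Stream one Common Voice 8 sample and resample 48 kHz -> 16 kHz
sample = next(iter(load_dataset("mozilla-foundation/common_voice_8_0", "hsb",
                                split="test", streaming=True, use_auth_token=True)))
audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()

inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])  # greedy CTC transcription
```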
|
DrishtiSharma/wav2vec2-large-xls-r-300m-as-g1
|
DrishtiSharma
| 2022-03-24T11:56:37Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"as",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language:
- as
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- as
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-large-xls-r-300m-as-g1
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: as
metrics:
- name: Test WER
type: wer
value: 0.6540934419202743
- name: Test CER
type: cer
value: 0.21454042646095625
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: as
metrics:
- name: Test WER
type: wer
value: NA
- name: Test CER
type: cer
value: NA
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-as-g1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - AS dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3327
- Wer: 0.5744
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-as-g1 --dataset mozilla-foundation/common_voice_8_0 --config as --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
Assamese language isn't available in speech-recognition-community-v2/dev_data
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 14.1958 | 5.26 | 100 | 7.1919 | 1.0 |
| 5.0035 | 10.51 | 200 | 3.9362 | 1.0 |
| 3.6193 | 15.77 | 300 | 3.4451 | 1.0 |
| 3.4852 | 21.05 | 400 | 3.3536 | 1.0 |
| 2.8489 | 26.31 | 500 | 1.6451 | 0.9100 |
| 0.9568 | 31.56 | 600 | 1.0514 | 0.7561 |
| 0.4865 | 36.82 | 700 | 1.0434 | 0.7184 |
| 0.322 | 42.1 | 800 | 1.0825 | 0.7210 |
| 0.2383 | 47.36 | 900 | 1.1304 | 0.6897 |
| 0.2136 | 52.62 | 1000 | 1.1150 | 0.6854 |
| 0.179 | 57.87 | 1100 | 1.2453 | 0.6875 |
| 0.1539 | 63.15 | 1200 | 1.2211 | 0.6704 |
| 0.1303 | 68.41 | 1300 | 1.2859 | 0.6747 |
| 0.1183 | 73.67 | 1400 | 1.2775 | 0.6721 |
| 0.0994 | 78.92 | 1500 | 1.2321 | 0.6404 |
| 0.0991 | 84.21 | 1600 | 1.2766 | 0.6524 |
| 0.0887 | 89.46 | 1700 | 1.3026 | 0.6344 |
| 0.0754 | 94.72 | 1800 | 1.3199 | 0.6704 |
| 0.0693 | 99.97 | 1900 | 1.3044 | 0.6361 |
| 0.0568 | 105.26 | 2000 | 1.3541 | 0.6254 |
| 0.0536 | 110.51 | 2100 | 1.3320 | 0.6249 |
| 0.0529 | 115.77 | 2200 | 1.3370 | 0.6271 |
| 0.048 | 121.05 | 2300 | 1.2757 | 0.6031 |
| 0.0419 | 126.31 | 2400 | 1.2661 | 0.6172 |
| 0.0349 | 131.56 | 2500 | 1.2897 | 0.6048 |
| 0.0309 | 136.82 | 2600 | 1.2688 | 0.5962 |
| 0.0278 | 142.1 | 2700 | 1.2885 | 0.5954 |
| 0.0254 | 147.36 | 2800 | 1.2988 | 0.5915 |
| 0.0223 | 152.62 | 2900 | 1.3153 | 0.5941 |
| 0.0216 | 157.87 | 3000 | 1.2936 | 0.5937 |
| 0.0186 | 163.15 | 3100 | 1.2906 | 0.5877 |
| 0.0156 | 168.41 | 3200 | 1.3476 | 0.5962 |
| 0.0158 | 173.67 | 3300 | 1.3363 | 0.5847 |
| 0.0142 | 178.92 | 3400 | 1.3367 | 0.5847 |
| 0.0153 | 184.21 | 3500 | 1.3105 | 0.5757 |
| 0.0119 | 189.46 | 3600 | 1.3255 | 0.5705 |
| 0.0115 | 194.72 | 3700 | 1.3340 | 0.5787 |
| 0.0103 | 199.97 | 3800 | 1.3327 | 0.5744 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
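The card covers evaluation only; a minimal, hedged inference sketch (hypothetical audio path, ffmpeg assumed) is:

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="DrishtiSharma/wav2vec2-large-xls-r-300m-as-g1")
print(asr("assamese_sample.wav")["text"])  # hypothetical recording; resampled to 16 kHz by the pipeline
```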
|
AndrewMcDowell/wav2vec2-xls-r-1b-japanese-hiragana-katakana
|
AndrewMcDowell
| 2022-03-24T11:56:32Z | 21 | 3 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"ja",
"hf-asr-leaderboard",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language:
- ja
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- robust-speech-event
- ja
- hf-asr-leaderboard
datasets:
- common_voice
model-index:
- name: ''
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: ja
metrics:
- name: Test WER
type: wer
value: 95.33
- name: Test CER
type: cer
value: 22.27
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: de
metrics:
- name: Test WER
type: wer
value: 100.0
- name: Test CER
type: cer
value: 30.33
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: ja
metrics:
- name: Test CER
type: cer
value: 29.63
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: ja
metrics:
- name: Test CER
type: cer
value: 32.69
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-1b-japanese-hiragana-katakana
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - JA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5500
- Wer: 1.0132
- Cer: 0.1609
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1500
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 1.7019 | 12.65 | 1000 | 1.0510 | 0.9832 | 0.2589 |
| 1.6385 | 25.31 | 2000 | 0.6670 | 0.9915 | 0.1851 |
| 1.4344 | 37.97 | 3000 | 0.6183 | 1.0213 | 0.1797 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python ./eval.py --model_id AndrewMcDowell/wav2vec2-xls-r-1b-japanese-hiragana-katakana --dataset mozilla-foundation/common_voice_8_0 --config ja --split test --log_outputs
```
2. To evaluate on `speech-recognition-community-v2/dev_data` with split `validation`
```bash
python ./eval.py --model_id AndrewMcDowell/wav2vec2-xls-r-1b-japanese-hiragana-katakana --dataset speech-recognition-community-v2/dev_data --config de --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
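Unlike the other XLS-R cards in this collection, this one does not show an inference block; the following is a minimal sketch assuming greedy CTC decoding without a language model and the usual Common Voice 8 streaming setup (test split and 48 kHz source rate are assumptions).

```python
import torch
import torchaudio.functional as F
from datasets import load_dataset
from transformers import AutoModelForCTC, Wav2Vec2Processor

model_id = "AndrewMcDowell/wav2vec2-xls-r-1b-japanese-hiragana-katakana"
model = AutoModelForCTC.from_pretrained(model_id)
processor = Wav2Vec2Processor.from_pretrained(model_id)

# Stream one Common Voice 8 sample and resample 48 kHz -> 16 kHz
sample = next(iter(load_dataset("mozilla-foundation/common_voice_8_0", "ja",
                                split="test", streaming=True, use_auth_token=True)))
audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()

inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])  # greedy CTC transcription (hiragana/katakana)
```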
|
sammy786/wav2vec2-xlsr-interlingua
|
sammy786
| 2022-03-24T11:56:13Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"ia",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- ia
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- ia
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: sammy786/wav2vec2-xlsr-interlingua
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: ia
metrics:
- name: Test WER
type: wer
value: 16.81
- name: Test CER
type: cer
value: 4.76
---
# sammy786/wav2vec2-xlsr-interlingua
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - ia dataset.
It achieves the following results on the evaluation set (a 10 percent split held out from the combined train, dev and other data):
- Loss: 5.44
- Wer: 19.78
## Model description
"facebook/wav2vec2-xls-r-1b" was finetuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data -
Common Voice Interlingua train.tsv, dev.tsv and other.tsv
## Training procedure
To create the training set, all available splits were concatenated and a 90-10 train-evaluation split was applied.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000045637994662983496
- train_batch_size: 16
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Step | Training Loss | Validation Loss | Wer |
|------|---------------|-----------------|----------|
| 200 | 4.649200 | 0.483339 | 0.511322 |
| 400 | 0.764700 | 0.133428 | 0.251288 |
| 600 | 0.563700 | 0.099292 | 0.227745 |
| 800 | 0.438800 | 0.087545 | 0.217445 |
| 1000 | 0.406800 | 0.072313 | 0.213848 |
| 1200 | 0.237500 | 0.066965 | 0.213766 |
| 1400 | 0.177800 | 0.064419 | 0.208126 |
| 1600 | 0.157100 | 0.065962 | 0.214011 |
| 1800 | 0.146600 | 0.059477 | 0.202076 |
| 2000 | 0.132800 | 0.055015 | 0.201831 |
| 2200 | 0.122000 | 0.055421 | 0.201749 |
| 2400 | 0.115700 | 0.054462 | 0.197826 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id sammy786/wav2vec2-xlsr-interlingua --dataset mozilla-foundation/common_voice_8_0 --config ia --split test
```
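The card ends with the scoring command; a minimal inference sketch follows, assuming greedy CTC decoding without a language model and the Common Voice 8 streaming pattern used by other cards here (test split and 48 kHz source rate are assumptions).

```python
import torch
import torchaudio.functional as F
from datasets import load_dataset
from transformers import AutoModelForCTC, Wav2Vec2Processor

model_id = "sammy786/wav2vec2-xlsr-interlingua"
model = AutoModelForCTC.from_pretrained(model_id)
processor = Wav2Vec2Processor.from_pretrained(model_id)

# Stream one Common Voice 8 sample and resample 48 kHz -> 16 kHz
sample = next(iter(load_dataset("mozilla-foundation/common_voice_8_0", "ia",
                                split="test", streaming=True, use_auth_token=True)))
audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()

inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])  # greedy CTC transcription
```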
|
sammy786/wav2vec2-xlsr-georgian
|
sammy786
| 2022-03-24T11:56:11Z | 7 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"ka",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- ka
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- ka
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: sammy786/wav2vec2-xlsr-georgian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: ka
metrics:
- name: Test WER
type: wer
value: 23.9
- name: Test CER
type: cer
value: 3.59
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: ka
metrics:
- name: Test WER
type: wer
value: 75.07
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: ka
metrics:
- name: Test WER
type: wer
value: 74.41
---
# sammy786/wav2vec2-xlsr-georgian
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - ka dataset.
It achieves the following results on the evaluation set (a 10 percent split held out from the combined train, dev and other datasets):
- Loss: 10.54
- Wer: 27.53
## Model description
"facebook/wav2vec2-xls-r-1b" was finetuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data: Common Voice Georgian `train.tsv`, `dev.tsv` and `other.tsv`.
## Training procedure
To create the train dataset, all available splits were concatenated and a 90-10 train/evaluation split was applied.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000045637994662983496
- train_batch_size: 8
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Step | Training Loss | Validation Loss | Wer |
|:----:|:-------------:|:---------------:|:--------:|
| 200 | 4.152100 | 0.823672 | 0.967814 |
| 400 | 0.889500 | 0.196740 | 0.444792 |
| 600 | 0.493700 | 0.155659 | 0.366115 |
| 800 | 0.328000 | 0.138066 | 0.358069 |
| 1000 | 0.260600 | 0.119236 | 0.324989 |
| 1200 | 0.217200 | 0.114050 | 0.313366 |
| 1400 | 0.188800 | 0.112600 | 0.302190 |
| 1600 | 0.166900 | 0.111154 | 0.295485 |
| 1800 | 0.155500 | 0.109963 | 0.286544 |
| 2000 | 0.140400 | 0.107587 | 0.277604 |
| 2200 | 0.142600 | 0.105662 | 0.277157 |
| 2400 | 0.135400 | 0.105414 | 0.275369 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id sammy786/wav2vec2-xlsr-georgian --dataset mozilla-foundation/common_voice_8_0 --config ka --split test
```
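For completeness (not part of the original card), a sketch of manual inference with `Wav2Vec2ForCTC`; the audio path is a placeholder and `librosa` is only used here for loading and resampling:

```python
import torch
import librosa
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "sammy786/wav2vec2-xlsr-georgian"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# "sample.wav" is a placeholder; resample to the 16 kHz rate the model expects.
speech, _ = librosa.load("sample.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding of the most likely token at each frame.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```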
|