This dump is drawn from a table with the following columns (ranges and cardinalities as reported by the dataset viewer):

| Column | Type | Range / cardinality |
|---|---|---|
| modelId | string | 5–139 chars |
| author | string | 2–42 chars |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-09-01 12:28:49 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 530 classes |
| tags | list | 1 – 4.05k items |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-09-01 12:27:35 |
| card | string | 11 – 1.01M chars |
**Model:** DrishtiSharma/wav2vec2-large-xls-r-300m-myv-v1 · **Author:** DrishtiSharma · **Last modified:** 2022-03-24T11:56:53Z · **Downloads:** 6 · **Likes:** 0 · **Library:** transformers · **Pipeline:** automatic-speech-recognition · **Created:** 2022-03-02T23:29:04Z
**Tags:** transformers, pytorch, tensorboard, wav2vec2, automatic-speech-recognition, mozilla-foundation/common_voice_8_0, generated_from_trainer, myv, robust-speech-event, model_for_talk, hf-asr-leaderboard, dataset:mozilla-foundation/common_voice_8_0, license:apache-2.0, model-index, endpoints_compatible, region:us
---
language:
- myv
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- myv
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-large-xls-r-300m-myv-v1
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: myv
metrics:
- name: Test WER
type: wer
value: 0.599548532731377
- name: Test CER
type: cer
value: 0.12953851902597
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: myv
metrics:
- name: Test WER
type: wer
value: NA
- name: Test CER
type: cer
value: NA
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-myv-v1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MYV dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8537
- Wer: 0.6160
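The Wer figure above is a word error rate: the word-level edit distance between reference and hypothesis transcripts divided by the reference length. A minimal sketch of that computation, assuming the standard definition used by Common Voice evaluation scripts (the example strings are toy inputs, not real Erzya transcripts):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("vaj monj kudoso", "vaj monj kudosa"))  # 1 substitution / 3 words ≈ 0.33
```

A WER of 0.616 therefore means roughly six word-level edits for every ten reference words.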
### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with the `test` split:
```bash
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-myv-v1 --dataset mozilla-foundation/common_voice_8_0 --config myv --split test --log_outputs
```
2. To evaluate on `speech-recognition-community-v2/dev_data`: not applicable, since Erzya (myv) is not available in that dataset.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000222
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 150
- mixed_precision_training: Native AMP
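The linear scheduler with 1000 warmup steps ramps the learning rate from zero up to the peak value, then decays it linearly to zero over the remaining steps. A minimal sketch of that schedule, using the peak LR above and the 3900 total steps visible in the training table below (the exact totals depend on dataset size):

```python
def linear_schedule_lr(step: int, peak_lr: float, warmup_steps: int, total_steps: int) -> float:
    """Learning rate at a given optimizer step: linear warmup, then linear decay to 0."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    remaining = max(total_steps - step, 0)
    return peak_lr * remaining / max(total_steps - warmup_steps, 1)

print(linear_schedule_lr(500, 0.000222, 1000, 3900))   # halfway through warmup: 0.000111
print(linear_schedule_lr(3900, 0.000222, 1000, 3900))  # end of training: 0.0
```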
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 19.453 | 1.92 | 50 | 16.4001 | 1.0 |
| 9.6875 | 3.85 | 100 | 5.4468 | 1.0 |
| 4.9988 | 5.77 | 150 | 4.3507 | 1.0 |
| 4.1148 | 7.69 | 200 | 3.6753 | 1.0 |
| 3.4922 | 9.62 | 250 | 3.3103 | 1.0 |
| 3.2443 | 11.54 | 300 | 3.1741 | 1.0 |
| 3.164 | 13.46 | 350 | 3.1346 | 1.0 |
| 3.0954 | 15.38 | 400 | 3.0428 | 1.0 |
| 3.0076 | 17.31 | 450 | 2.9137 | 1.0 |
| 2.6883 | 19.23 | 500 | 2.1476 | 0.9978 |
| 1.5124 | 21.15 | 550 | 0.8955 | 0.8225 |
| 0.8711 | 23.08 | 600 | 0.6948 | 0.7591 |
| 0.6695 | 25.0 | 650 | 0.6683 | 0.7636 |
| 0.5606 | 26.92 | 700 | 0.6821 | 0.7435 |
| 0.503 | 28.85 | 750 | 0.7220 | 0.7516 |
| 0.4528 | 30.77 | 800 | 0.6638 | 0.7324 |
| 0.4219 | 32.69 | 850 | 0.7120 | 0.7435 |
| 0.4109 | 34.62 | 900 | 0.7122 | 0.7511 |
| 0.3887 | 36.54 | 950 | 0.7179 | 0.7199 |
| 0.3895 | 38.46 | 1000 | 0.7322 | 0.7525 |
| 0.391 | 40.38 | 1050 | 0.6850 | 0.7364 |
| 0.3537 | 42.31 | 1100 | 0.7571 | 0.7279 |
| 0.3267 | 44.23 | 1150 | 0.7575 | 0.7257 |
| 0.3195 | 46.15 | 1200 | 0.7580 | 0.6998 |
| 0.2891 | 48.08 | 1250 | 0.7452 | 0.7101 |
| 0.294 | 50.0 | 1300 | 0.7316 | 0.6945 |
| 0.2854 | 51.92 | 1350 | 0.7241 | 0.6757 |
| 0.2801 | 53.85 | 1400 | 0.7532 | 0.6887 |
| 0.2502 | 55.77 | 1450 | 0.7587 | 0.6811 |
| 0.2427 | 57.69 | 1500 | 0.7231 | 0.6851 |
| 0.2311 | 59.62 | 1550 | 0.7288 | 0.6632 |
| 0.2176 | 61.54 | 1600 | 0.7711 | 0.6664 |
| 0.2117 | 63.46 | 1650 | 0.7914 | 0.6940 |
| 0.2114 | 65.38 | 1700 | 0.8065 | 0.6918 |
| 0.1913 | 67.31 | 1750 | 0.8372 | 0.6945 |
| 0.1897 | 69.23 | 1800 | 0.8051 | 0.6869 |
| 0.1865 | 71.15 | 1850 | 0.8076 | 0.6740 |
| 0.1844 | 73.08 | 1900 | 0.7935 | 0.6708 |
| 0.1757 | 75.0 | 1950 | 0.8015 | 0.6610 |
| 0.1636 | 76.92 | 2000 | 0.7614 | 0.6414 |
| 0.1637 | 78.85 | 2050 | 0.8123 | 0.6592 |
| 0.1599 | 80.77 | 2100 | 0.7907 | 0.6566 |
| 0.1498 | 82.69 | 2150 | 0.8641 | 0.6757 |
| 0.1545 | 84.62 | 2200 | 0.7438 | 0.6682 |
| 0.1433 | 86.54 | 2250 | 0.8014 | 0.6624 |
| 0.1427 | 88.46 | 2300 | 0.7758 | 0.6646 |
| 0.1423 | 90.38 | 2350 | 0.7741 | 0.6423 |
| 0.1298 | 92.31 | 2400 | 0.7938 | 0.6414 |
| 0.1111 | 94.23 | 2450 | 0.7976 | 0.6467 |
| 0.1243 | 96.15 | 2500 | 0.7916 | 0.6481 |
| 0.1215 | 98.08 | 2550 | 0.7594 | 0.6392 |
| 0.113 | 100.0 | 2600 | 0.8236 | 0.6392 |
| 0.1077 | 101.92 | 2650 | 0.7959 | 0.6347 |
| 0.0988 | 103.85 | 2700 | 0.8189 | 0.6392 |
| 0.0953 | 105.77 | 2750 | 0.8157 | 0.6414 |
| 0.0889 | 107.69 | 2800 | 0.7946 | 0.6369 |
| 0.0929 | 109.62 | 2850 | 0.8255 | 0.6360 |
| 0.0822 | 111.54 | 2900 | 0.8320 | 0.6334 |
| 0.086 | 113.46 | 2950 | 0.8539 | 0.6490 |
| 0.0825 | 115.38 | 3000 | 0.8438 | 0.6418 |
| 0.0727 | 117.31 | 3050 | 0.8568 | 0.6481 |
| 0.0717 | 119.23 | 3100 | 0.8447 | 0.6512 |
| 0.0815 | 121.15 | 3150 | 0.8470 | 0.6445 |
| 0.0689 | 123.08 | 3200 | 0.8264 | 0.6249 |
| 0.0726 | 125.0 | 3250 | 0.7981 | 0.6169 |
| 0.0648 | 126.92 | 3300 | 0.8237 | 0.6200 |
| 0.0632 | 128.85 | 3350 | 0.8416 | 0.6249 |
| 0.06 | 130.77 | 3400 | 0.8276 | 0.6173 |
| 0.0616 | 132.69 | 3450 | 0.8429 | 0.6209 |
| 0.0614 | 134.62 | 3500 | 0.8485 | 0.6271 |
| 0.0539 | 136.54 | 3550 | 0.8598 | 0.6218 |
| 0.0555 | 138.46 | 3600 | 0.8557 | 0.6169 |
| 0.0604 | 140.38 | 3650 | 0.8436 | 0.6186 |
| 0.0556 | 142.31 | 3700 | 0.8428 | 0.6178 |
| 0.051 | 144.23 | 3750 | 0.8440 | 0.6142 |
| 0.0526 | 146.15 | 3800 | 0.8566 | 0.6142 |
| 0.052 | 148.08 | 3850 | 0.8544 | 0.6178 |
| 0.0519 | 150.0 | 3900 | 0.8537 | 0.6160 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
**Model:** DrishtiSharma/wav2vec2-large-xls-r-300m-hsb-v2 · **Author:** DrishtiSharma · **Last modified:** 2022-03-24T11:56:48Z · **Downloads:** 8 · **Likes:** 0 · **Library:** transformers · **Pipeline:** automatic-speech-recognition · **Created:** 2022-03-02T23:29:04Z
**Tags:** transformers, pytorch, tensorboard, wav2vec2, automatic-speech-recognition, mozilla-foundation/common_voice_8_0, generated_from_trainer, hsb, robust-speech-event, model_for_talk, hf-asr-leaderboard, dataset:mozilla-foundation/common_voice_8_0, license:apache-2.0, model-index, endpoints_compatible, region:us
---
language:
- hsb
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- hsb
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-large-xls-r-300m-hsb-v2
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: hsb
metrics:
- name: Test WER
type: wer
value: 0.4654228855721393
- name: Test CER
type: cer
value: 0.11351049990708047
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: hsb
metrics:
- name: Test WER
type: wer
value: NA
- name: Test CER
type: cer
value: NA
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hsb-v2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - HSB dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5328
- Wer: 0.4596
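The metadata above also reports a character error rate (CER: 0.1135 on the test split). CER uses the same edit-distance idea as WER, but over characters rather than words, which is why it is usually much lower. A minimal sketch, assuming whitespace counts like any other character (the example strings are toy inputs, not real Upper Sorbian):

```python
def edit_distance(a, b) -> int:
    """Classic row-based Levenshtein distance over any two sequences."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (x != y)))  # substitution or match
        prev = cur
    return prev[-1]

def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: character-level edit distance over reference length."""
    return edit_distance(reference, hypothesis) / len(reference)

print(cer("wotrow", "wotrov"))  # 1 edit over 6 characters ≈ 0.17
```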
### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with the `test` split:
```bash
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-hsb-v2 --dataset mozilla-foundation/common_voice_8_0 --config hsb --split test --log_outputs
```
2. To evaluate on `speech-recognition-community-v2/dev_data`: not applicable, since Upper Sorbian (hsb) is not available in that dataset.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00045
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 8.5979 | 3.23 | 100 | 3.5602 | 1.0 |
| 3.303 | 6.45 | 200 | 3.2238 | 1.0 |
| 3.2034 | 9.68 | 300 | 3.2002 | 0.9888 |
| 2.7986 | 12.9 | 400 | 1.2408 | 0.9210 |
| 1.3869 | 16.13 | 500 | 0.7973 | 0.7462 |
| 1.0228 | 19.35 | 600 | 0.6722 | 0.6788 |
| 0.8311 | 22.58 | 700 | 0.6100 | 0.6150 |
| 0.717 | 25.81 | 800 | 0.6236 | 0.6013 |
| 0.6264 | 29.03 | 900 | 0.6031 | 0.5575 |
| 0.5494 | 32.26 | 1000 | 0.5656 | 0.5309 |
| 0.4781 | 35.48 | 1100 | 0.5289 | 0.4996 |
| 0.4311 | 38.71 | 1200 | 0.5375 | 0.4768 |
| 0.3902 | 41.94 | 1300 | 0.5246 | 0.4703 |
| 0.3508 | 45.16 | 1400 | 0.5382 | 0.4696 |
| 0.3199 | 48.39 | 1500 | 0.5328 | 0.4596 |
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
**Model:** DrishtiSharma/wav2vec2-large-xls-r-300m-br-d10 · **Author:** DrishtiSharma · **Last modified:** 2022-03-24T11:56:43Z · **Downloads:** 8 · **Likes:** 0 · **Library:** transformers · **Pipeline:** automatic-speech-recognition · **Created:** 2022-03-02T23:29:04Z
**Tags:** transformers, pytorch, tensorboard, wav2vec2, automatic-speech-recognition, generated_from_trainer, robust-speech-event, hf-asr-leaderboard, br, dataset:mozilla-foundation/common_voice_8_0, license:apache-2.0, model-index, endpoints_compatible, region:us
---
language:
- br
license: apache-2.0
tags:
- generated_from_trainer
- robust-speech-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
metrics:
- wer
model-index:
- name: wav2vec2-large-xls-r-300m-br-d10
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
type: mozilla-foundation/common_voice_8_0
name: Common Voice 8
args: br
metrics:
- type: wer
value: 0.5230357484228637
name: Test WER
- name: Test CER
type: cer
value: 0.1880661144228536
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: br
metrics:
- name: Test WER
type: wer
value: NA
- name: Test CER
type: cer
value: NA
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-br-d10
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - BR dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1382
- Wer: 0.4895
### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with the `test` split:
```bash
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-br-d10 --dataset mozilla-foundation/common_voice_8_0 --config br --split test --log_outputs
```
2. To evaluate on `speech-recognition-community-v2/dev_data`: not applicable, since Breton (br) is not available in that dataset.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 800
- num_epochs: 50
- mixed_precision_training: Native AMP
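The total_train_batch_size of 32 follows directly from the per-device batch size and gradient accumulation: gradients from several micro-batches are accumulated before each optimizer update. A sketch of the bookkeeping (plain arithmetic, not the Trainer's actual implementation; the dataset size is hypothetical):

```python
train_batch_size = 16
gradient_accumulation_steps = 2

# Effective batch size per optimizer update.
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 32

# With accumulation, one optimizer step happens every N micro-batches,
# so the number of optimizer steps per epoch shrinks accordingly.
num_examples = 2048  # hypothetical dataset size
micro_batches_per_epoch = num_examples // train_batch_size
optimizer_steps_per_epoch = micro_batches_per_epoch // gradient_accumulation_steps
print(optimizer_steps_per_epoch)  # 64
```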
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 13.611 | 0.68 | 100 | 5.8492 | 1.0 |
| 3.8176 | 1.35 | 200 | 3.2181 | 1.0 |
| 3.0457 | 2.03 | 300 | 3.0902 | 1.0 |
| 2.2632 | 2.7 | 400 | 1.4882 | 0.9426 |
| 1.1965 | 3.38 | 500 | 1.1396 | 0.7950 |
| 0.984 | 4.05 | 600 | 1.0216 | 0.7583 |
| 0.8036 | 4.73 | 700 | 1.0258 | 0.7202 |
| 0.7061 | 5.41 | 800 | 0.9710 | 0.6820 |
| 0.689 | 6.08 | 900 | 0.9731 | 0.6488 |
| 0.6063 | 6.76 | 1000 | 0.9442 | 0.6569 |
| 0.5215 | 7.43 | 1100 | 1.0221 | 0.6671 |
| 0.4965 | 8.11 | 1200 | 0.9266 | 0.6181 |
| 0.4321 | 8.78 | 1300 | 0.9050 | 0.5991 |
| 0.3762 | 9.46 | 1400 | 0.9801 | 0.6134 |
| 0.3747 | 10.14 | 1500 | 0.9210 | 0.5747 |
| 0.3554 | 10.81 | 1600 | 0.9720 | 0.6051 |
| 0.3148 | 11.49 | 1700 | 0.9672 | 0.6099 |
| 0.3176 | 12.16 | 1800 | 1.0120 | 0.5966 |
| 0.2915 | 12.84 | 1900 | 0.9490 | 0.5653 |
| 0.2696 | 13.51 | 2000 | 0.9394 | 0.5819 |
| 0.2569 | 14.19 | 2100 | 1.0197 | 0.5667 |
| 0.2395 | 14.86 | 2200 | 0.9771 | 0.5608 |
| 0.2367 | 15.54 | 2300 | 1.0516 | 0.5678 |
| 0.2153 | 16.22 | 2400 | 1.0097 | 0.5679 |
| 0.2092 | 16.89 | 2500 | 1.0143 | 0.5430 |
| 0.2046 | 17.57 | 2600 | 1.0884 | 0.5631 |
| 0.1937 | 18.24 | 2700 | 1.0113 | 0.5648 |
| 0.1752 | 18.92 | 2800 | 1.0056 | 0.5470 |
| 0.164 | 19.59 | 2900 | 1.0340 | 0.5508 |
| 0.1723 | 20.27 | 3000 | 1.0743 | 0.5615 |
| 0.1535 | 20.95 | 3100 | 1.0495 | 0.5465 |
| 0.1432 | 21.62 | 3200 | 1.0390 | 0.5333 |
| 0.1561 | 22.3 | 3300 | 1.0798 | 0.5590 |
| 0.1384 | 22.97 | 3400 | 1.1716 | 0.5449 |
| 0.1359 | 23.65 | 3500 | 1.1154 | 0.5420 |
| 0.1356 | 24.32 | 3600 | 1.0883 | 0.5387 |
| 0.1355 | 25.0 | 3700 | 1.1114 | 0.5504 |
| 0.1158 | 25.68 | 3800 | 1.1171 | 0.5388 |
| 0.1166 | 26.35 | 3900 | 1.1335 | 0.5403 |
| 0.1165 | 27.03 | 4000 | 1.1374 | 0.5248 |
| 0.1064 | 27.7 | 4100 | 1.0336 | 0.5298 |
| 0.0987 | 28.38 | 4200 | 1.0407 | 0.5216 |
| 0.104 | 29.05 | 4300 | 1.1012 | 0.5350 |
| 0.0894 | 29.73 | 4400 | 1.1016 | 0.5310 |
| 0.0912 | 30.41 | 4500 | 1.1383 | 0.5302 |
| 0.0972 | 31.08 | 4600 | 1.0851 | 0.5214 |
| 0.0832 | 31.76 | 4700 | 1.1705 | 0.5311 |
| 0.0859 | 32.43 | 4800 | 1.0750 | 0.5192 |
| 0.0811 | 33.11 | 4900 | 1.0900 | 0.5180 |
| 0.0825 | 33.78 | 5000 | 1.1271 | 0.5196 |
| 0.07 | 34.46 | 5100 | 1.1289 | 0.5141 |
| 0.0689 | 35.14 | 5200 | 1.0960 | 0.5101 |
| 0.068 | 35.81 | 5300 | 1.1377 | 0.5050 |
| 0.0776 | 36.49 | 5400 | 1.0880 | 0.5194 |
| 0.0642 | 37.16 | 5500 | 1.1027 | 0.5076 |
| 0.0607 | 37.84 | 5600 | 1.1293 | 0.5119 |
| 0.0607 | 38.51 | 5700 | 1.1229 | 0.5103 |
| 0.0545 | 39.19 | 5800 | 1.1168 | 0.5103 |
| 0.0562 | 39.86 | 5900 | 1.1206 | 0.5073 |
| 0.0484 | 40.54 | 6000 | 1.1710 | 0.5019 |
| 0.0499 | 41.22 | 6100 | 1.1511 | 0.5100 |
| 0.0455 | 41.89 | 6200 | 1.1488 | 0.5009 |
| 0.0475 | 42.57 | 6300 | 1.1196 | 0.4944 |
| 0.0413 | 43.24 | 6400 | 1.1654 | 0.4996 |
| 0.0389 | 43.92 | 6500 | 1.0961 | 0.4930 |
| 0.0428 | 44.59 | 6600 | 1.0955 | 0.4938 |
| 0.039 | 45.27 | 6700 | 1.1323 | 0.4955 |
| 0.0352 | 45.95 | 6800 | 1.1040 | 0.4930 |
| 0.0334 | 46.62 | 6900 | 1.1382 | 0.4942 |
| 0.0338 | 47.3 | 7000 | 1.1264 | 0.4911 |
| 0.0307 | 47.97 | 7100 | 1.1216 | 0.4881 |
| 0.0286 | 48.65 | 7200 | 1.1459 | 0.4894 |
| 0.0348 | 49.32 | 7300 | 1.1419 | 0.4906 |
| 0.0329 | 50.0 | 7400 | 1.1382 | 0.4895 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
**Model:** DrishtiSharma/wav2vec2-large-xls-r-300m-bas-v1 · **Author:** DrishtiSharma · **Last modified:** 2022-03-24T11:56:40Z · **Downloads:** 4 · **Likes:** 0 · **Library:** transformers · **Pipeline:** automatic-speech-recognition · **Created:** 2022-03-02T23:29:04Z
**Tags:** transformers, pytorch, tensorboard, wav2vec2, automatic-speech-recognition, mozilla-foundation/common_voice_8_0, generated_from_trainer, bas, robust-speech-event, model_for_talk, hf-asr-leaderboard, dataset:mozilla-foundation/common_voice_8_0, license:apache-2.0, model-index, endpoints_compatible, region:us
---
language:
- bas
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- bas
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-large-xls-r-300m-bas-v1
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: bas
metrics:
- name: Test WER
type: wer
value: 0.3566497929130234
- name: Test CER
type: cer
value: 0.1102657634184471
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: bas
metrics:
- name: Test WER
type: wer
value: NA
- name: Test CER
type: cer
value: NA
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-bas-v1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - BAS dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5997
- Wer: 0.3870
### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with the `test` split:
```bash
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-bas-v1 --dataset mozilla-foundation/common_voice_8_0 --config bas --split test --log_outputs
```
2. To evaluate on `speech-recognition-community-v2/dev_data`: not applicable, since Basaa (bas) is not available in that dataset.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000111
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 12.7076 | 5.26 | 200 | 3.6361 | 1.0 |
| 3.1657 | 10.52 | 400 | 3.0101 | 1.0 |
| 2.3987 | 15.78 | 600 | 0.9125 | 0.6774 |
| 1.0079 | 21.05 | 800 | 0.6477 | 0.5352 |
| 0.7392 | 26.31 | 1000 | 0.5432 | 0.4929 |
| 0.6114 | 31.57 | 1200 | 0.5498 | 0.4639 |
| 0.5222 | 36.83 | 1400 | 0.5220 | 0.4561 |
| 0.4648 | 42.1 | 1600 | 0.5586 | 0.4289 |
| 0.4103 | 47.36 | 1800 | 0.5337 | 0.4082 |
| 0.3692 | 52.62 | 2000 | 0.5421 | 0.3861 |
| 0.3403 | 57.88 | 2200 | 0.5549 | 0.4096 |
| 0.3011 | 63.16 | 2400 | 0.5833 | 0.3925 |
| 0.2932 | 68.42 | 2600 | 0.5674 | 0.3815 |
| 0.2696 | 73.68 | 2800 | 0.5734 | 0.3889 |
| 0.2496 | 78.94 | 3000 | 0.5968 | 0.3985 |
| 0.2289 | 84.21 | 3200 | 0.5888 | 0.3893 |
| 0.2091 | 89.47 | 3400 | 0.5849 | 0.3852 |
| 0.2005 | 94.73 | 3600 | 0.5938 | 0.3875 |
| 0.1876 | 99.99 | 3800 | 0.5997 | 0.3870 |
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
**Model:** Cdial/hausa-asr · **Author:** Cdial · **Last modified:** 2022-03-24T11:56:34Z · **Downloads:** 9 · **Likes:** 3 · **Library:** transformers · **Pipeline:** automatic-speech-recognition · **Created:** 2022-03-02T23:29:04Z
**Tags:** transformers, wav2vec2, automatic-speech-recognition, mozilla-foundation/common_voice_8_0, generated_from_trainer, ha, robust-speech-event, model_for_talk, hf-asr-leaderboard, dataset:mozilla-foundation/common_voice_8_0, license:apache-2.0, model-index, endpoints_compatible, region:us
---
language:
- ha
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- ha
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: Cdial/Hausa_xlsr
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: ha
metrics:
- name: Test WER
type: wer
value: 0.20614541257934219
- name: Test CER
type: cer
value: 0.04358048053214061
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: ha
metrics:
- name: Test WER
type: wer
value: 0.20614541257934219
- name: Test CER
type: cer
value: 0.04358048053214061
---
# Cdial/Hausa_xlsr
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m)
It achieves the following results on the evaluation set (a 10% split of the training data, which merges the train, dev, invalidated, reported, and other sets):
- Loss: 0.275118
- Wer: 0.329955
## Model description
"facebook/wav2vec2-xls-r-300m" was finetuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data: Common Voice Hausa `train.tsv`, `dev.tsv`, `invalidated.tsv`, `reported.tsv`, and `other.tsv`.
Only utterances with more upvotes than downvotes were kept, and duplicates were removed after concatenating all of the Common Voice 7.0 splits listed above.
## Training procedure
To build the training dataset, all of the splits above were concatenated and a 90/10 train/evaluation split was applied.
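A deterministic 90/10 split like the one described can be sketched as follows (the seed value and the stand-in rows are illustrative, not the authors' actual preprocessing code):

```python
import random

def split_90_10(rows, seed=13):
    """Shuffle a copy of the rows and split into 90% train / 10% eval, deterministically."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)
    cut = int(len(rows) * 0.9)
    return rows[:cut], rows[cut:]

rows = [f"utterance_{i}" for i in range(100)]  # stand-in for concatenated TSV rows
train, evaluation = split_90_10(rows)
print(len(train), len(evaluation))  # 90 10
```

Seeding a private `random.Random` instance keeps the split reproducible without touching the global RNG state.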
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000096
- train_batch_size: 16
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 2
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
- mixed_precision_training: Native AMP
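Unlike the linear schedule used by the other cards here, cosine_with_restarts decays the learning rate along a cosine curve after warmup, periodically resetting it to the peak. A minimal sketch of one common formulation (the total step count of 6000 comes from the table below; the cycle count is illustrative):

```python
import math

def cosine_with_restarts_lr(step, peak_lr, warmup_steps, total_steps, num_cycles=1):
    """LR with linear warmup followed by cosine decay with hard restarts."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    progress = (step - warmup_steps) / max(total_steps - warmup_steps, 1)
    if progress >= 1.0:
        return 0.0
    # Position within the current cosine cycle, in [0, 1).
    cycle_pos = (num_cycles * progress) % 1.0
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * cycle_pos))

print(cosine_with_restarts_lr(500, 9.6e-05, 500, 6000))  # end of warmup: back at the peak LR
```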
### Training results
| Step | Training Loss | Validation Loss | Wer |
|------|---------------|-----------------|----------|
| 500 | 5.175900 | 2.750914 | 1.000000 |
| 1000 | 1.028700 | 0.338649 | 0.497999 |
| 1500 | 0.332200 | 0.246896 | 0.402241 |
| 2000 | 0.227300 | 0.239640 | 0.395839 |
| 2500 | 0.175000 | 0.239577 | 0.373966 |
| 3000 | 0.140400 | 0.243272 | 0.356095 |
| 3500 | 0.119200 | 0.263761 | 0.365164 |
| 4000 | 0.099300 | 0.265954 | 0.353428 |
| 4500 | 0.084400 | 0.276367 | 0.349693 |
| 5000 | 0.073700 | 0.282631 | 0.343825 |
| 5500 | 0.068000 | 0.282344 | 0.341158 |
| 6000 | 0.064500 | 0.281591 | 0.342491 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.18.3
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id Cdial/hausa-asr --dataset mozilla-foundation/common_voice_8_0 --config ha --split test
```
**Model:** lgris/wav2vec2-xls-r-1b-portuguese-CORAA-3 · **Author:** lgris · **Last modified:** 2022-03-24T11:55:55Z · **Downloads:** 6 · **Likes:** 0 · **Library:** transformers · **Pipeline:** automatic-speech-recognition · **Created:** 2022-03-02T23:29:05Z
**Tags:** transformers, pytorch, wav2vec2, automatic-speech-recognition, generated_from_trainer, pt, robust-speech-event, hf-asr-leaderboard, license:apache-2.0, model-index, endpoints_compatible, region:us
---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- pt
- robust-speech-event
- hf-asr-leaderboard
model-index:
- name: wav2vec2-xls-r-1b-portuguese-CORAA-3
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: pt
metrics:
- name: Test WER
type: wer
value: 71.67
- name: Test CER
type: cer
value: 30.64
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: pt
metrics:
- name: Test WER
type: wer
value: 68.18
- name: Test CER
type: cer
value: 28.34
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: pt
metrics:
- name: Test WER
type: wer
value: 56.76
- name: Test CER
type: cer
value: 23.7
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-1b-portuguese-CORAA-3
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on [CORAA dataset](https://github.com/nilc-nlp/CORAA).
It achieves the following results on the evaluation set:
- Loss: 1.0029
- Wer: 0.6020
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5000
- training_steps: 30000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 2.0169 | 0.21 | 5000 | 1.9582 | 0.9283 |
| 1.8561 | 0.42 | 10000 | 1.6144 | 0.8554 |
| 1.6823 | 0.63 | 15000 | 1.4165 | 0.7710 |
| 1.52 | 0.84 | 20000 | 1.2441 | 0.7289 |
| 1.3757 | 1.05 | 25000 | 1.1061 | 0.6491 |
| 1.2377 | 1.26 | 30000 | 1.0029 | 0.6020 |
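Because the scheduler counts optimizer steps while the table also logs epochs, the two columns let you back out the approximate number of training examples seen per epoch. A rough back-of-envelope sketch (an estimate derived from the logged values, not a figure stated by the authors):

```python
# Final row of the table: step 30000 at epoch 1.26, with an effective batch size of 16.
total_train_batch_size = 16
final_step, final_epoch = 30000, 1.26

steps_per_epoch = final_step / final_epoch
approx_examples = steps_per_epoch * total_train_batch_size
print(round(approx_examples))  # roughly 381k examples per epoch
```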
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3.dev0
- Tokenizers 0.11.0
**Model:** kapilkd13/xls-r-hi-test · **Author:** kapilkd13 · **Last modified:** 2022-03-24T11:55:50Z · **Downloads:** 7 · **Likes:** 0 · **Library:** transformers · **Pipeline:** automatic-speech-recognition · **Created:** 2022-03-02T23:29:05Z
**Tags:** transformers, pytorch, wav2vec2, automatic-speech-recognition, mozilla-foundation/common_voice_7_0, robust-speech-event, generated_from_trainer, hf-asr-leaderboard, hi, dataset:mozilla-foundation/common_voice_7_0, license:apache-2.0, endpoints_compatible, region:us
---
language:
- hi
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- robust-speech-event
- generated_from_trainer
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: ''
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7.0
type: mozilla-foundation/common_voice_7_0
args: hi
metrics:
- name: Test WER
type: wer
value: 38.18
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r-hi-test
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - HI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7346
- Wer: 1.0479
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 8000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.36 | 400 | 1.4595 | 1.0039 |
| 4.7778 | 2.71 | 800 | 0.8082 | 1.0115 |
| 0.6408 | 4.07 | 1200 | 0.7032 | 1.0079 |
| 0.3937 | 5.42 | 1600 | 0.6889 | 1.0433 |
| 0.3 | 6.78 | 2000 | 0.6820 | 1.0069 |
| 0.3 | 8.14 | 2400 | 0.6670 | 1.0196 |
| 0.226 | 9.49 | 2800 | 0.7216 | 1.0422 |
| 0.197 | 10.85 | 3200 | 0.7669 | 1.0534 |
| 0.165 | 12.2 | 3600 | 0.7517 | 1.0200 |
| 0.1486 | 13.56 | 4000 | 0.7125 | 1.0357 |
| 0.1486 | 14.92 | 4400 | 0.7447 | 1.0347 |
| 0.122 | 16.27 | 4800 | 0.6899 | 1.0440 |
| 0.1069 | 17.63 | 5200 | 0.7212 | 1.0350 |
| 0.0961 | 18.98 | 5600 | 0.7417 | 1.0408 |
| 0.086 | 20.34 | 6000 | 0.7402 | 1.0356 |
| 0.086 | 21.69 | 6400 | 0.7761 | 1.0420 |
| 0.0756 | 23.05 | 6800 | 0.7346 | 1.0369 |
| 0.0666 | 24.41 | 7200 | 0.7506 | 1.0449 |
| 0.0595 | 25.76 | 7600 | 0.7319 | 1.0476 |
| 0.054 | 27.12 | 8000 | 0.7346 | 1.0479 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
**Model:** jcmc/wav2vec-1b-cv8-ir · **Author:** jcmc · **Last modified:** 2022-03-24T11:55:44Z · **Downloads:** 6 · **Likes:** 1 · **Library:** transformers · **Pipeline:** automatic-speech-recognition · **Created:** 2022-03-02T23:29:05Z
**Tags:** transformers, pytorch, wav2vec2, automatic-speech-recognition, mozilla-foundation/common_voice_8_0, generated_from_trainer, ga-IE, robust-speech-event, hf-asr-leaderboard, dataset:mozilla-foundation/common_voice_8_0, license:apache-2.0, model-index, endpoints_compatible, region:us
---
language:
- ga-IE
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- ga-IE
- robust-speech-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec-1b-cv8-ir
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: ga-IE
metrics:
- name: Test WER
type: wer
value: 43.7
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec-1b-cv8-ir
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - GA-IE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8445
- Wer: 0.5585
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 60.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.7135 | 31.24 | 500 | 0.9609 | 0.6926 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
**Model:** infinitejoy/wav2vec2-large-xls-r-300m-chuvash · **Author:** infinitejoy · **Last modified:** 2022-03-24T11:55:42Z · **Downloads:** 5 · **Likes:** 0 · **Library:** transformers · **Pipeline:** automatic-speech-recognition · **Created:** 2022-03-02T23:29:05Z
**Tags:** transformers, pytorch, wav2vec2, automatic-speech-recognition, mozilla-foundation/common_voice_7_0, generated_from_trainer, cv, robust-speech-event, model_for_talk, hf-asr-leaderboard, dataset:mozilla-foundation/common_voice_7_0, license:apache-2.0, model-index, endpoints_compatible, region:us
---
language:
- cv
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
- cv
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: XLS-R-300M - Chuvash
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: cv
metrics:
- name: Test WER
type: wer
value: 60.31
- name: Test CER
type: cer
value: 15.08
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-chuvash
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - CV dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7651
- Wer: 0.6166
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.8032 | 8.77 | 500 | 0.8059 | 0.8352 |
| 1.2608 | 17.54 | 1000 | 0.5828 | 0.6769 |
| 1.1337 | 26.32 | 1500 | 0.6892 | 0.6908 |
| 1.0457 | 35.09 | 2000 | 0.7077 | 0.6781 |
| 0.97 | 43.86 | 2500 | 0.5993 | 0.6228 |
| 0.8767 | 52.63 | 3000 | 0.7213 | 0.6604 |
| 0.8223 | 61.4 | 3500 | 0.8161 | 0.6968 |
| 0.7441 | 70.18 | 4000 | 0.7057 | 0.6184 |
| 0.7011 | 78.95 | 4500 | 0.7027 | 0.6024 |
| 0.6542 | 87.72 | 5000 | 0.7092 | 0.5979 |
| 0.6081 | 96.49 | 5500 | 0.7917 | 0.6324 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
hf-test/xls-r-300m-sv-cv8
|
hf-test
| 2022-03-24T11:55:37Z | 8 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- sv-SE
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- robust-speech-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: XLS-R-300M - Swedish - CV8
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: sv-SE
metrics:
- name: Test WER
type: wer
value: 17.1
- name: Test CER
type: cer
value: 5.7
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: sv
metrics:
- name: Test WER
type: wer
value: 26.92
- name: Test CER
type: cer
value: 12.53
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r-300m-sv-cv8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - SV-SE dataset.
It achieves the following results on the evaluation set:
**Without LM**:
- Wer: 0.2465
- Cer: 0.0717
**With LM**:
- Wer: 0.1710
- Cer: 0.0569
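The WER figures above are word-level edit distances normalized by reference length; the following is a minimal, self-contained sketch of the computation (illustrative only, not the evaluation script used for this model):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("det var en gång", "det var en gang"))  # 0.25
```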
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.3224 | 1.37 | 500 | 3.2676 | 1.0 |
| 2.9319 | 2.74 | 1000 | 2.9287 | 1.0000 |
| 2.1173 | 4.11 | 1500 | 1.1478 | 0.8788 |
| 1.6973 | 5.48 | 2000 | 0.6749 | 0.6547 |
| 1.5865 | 6.85 | 2500 | 0.5500 | 0.5634 |
| 1.5094 | 8.22 | 3000 | 0.4840 | 0.5430 |
| 1.4644 | 9.59 | 3500 | 0.4844 | 0.4142 |
| 1.4061 | 10.96 | 4000 | 0.4356 | 0.3808 |
| 1.3584 | 12.33 | 4500 | 0.4192 | 0.3698 |
| 1.3438 | 13.7 | 5000 | 0.3980 | 0.3584 |
| 1.3332 | 15.07 | 5500 | 0.3896 | 0.3572 |
| 1.3025 | 16.44 | 6000 | 0.3835 | 0.3487 |
| 1.2979 | 17.81 | 6500 | 0.3781 | 0.3417 |
| 1.2736 | 19.18 | 7000 | 0.3734 | 0.3270 |
| 1.2415 | 20.55 | 7500 | 0.3637 | 0.3316 |
| 1.2255 | 21.92 | 8000 | 0.3546 | 0.3147 |
| 1.2193 | 23.29 | 8500 | 0.3524 | 0.3196 |
| 1.2104 | 24.66 | 9000 | 0.3403 | 0.3097 |
| 1.1965 | 26.03 | 9500 | 0.3508 | 0.3093 |
| 1.1976 | 27.4 | 10000 | 0.3419 | 0.3071 |
| 1.182 | 28.77 | 10500 | 0.3364 | 0.2963 |
| 1.158 | 30.14 | 11000 | 0.3338 | 0.2932 |
| 1.1414 | 31.51 | 11500 | 0.3376 | 0.2940 |
| 1.1402 | 32.88 | 12000 | 0.3370 | 0.2891 |
| 1.1213 | 34.25 | 12500 | 0.3201 | 0.2874 |
| 1.1207 | 35.62 | 13000 | 0.3261 | 0.2826 |
| 1.1074 | 36.98 | 13500 | 0.3117 | 0.2786 |
| 1.0818 | 38.36 | 14000 | 0.3194 | 0.2776 |
| 1.0889 | 39.73 | 14500 | 0.3188 | 0.2738 |
| 1.0672 | 41.1 | 15000 | 0.3196 | 0.2773 |
| 1.0838 | 42.47 | 15500 | 0.3130 | 0.2739 |
| 1.0553 | 43.83 | 16000 | 0.3165 | 0.2704 |
| 1.0786 | 45.21 | 16500 | 0.3108 | 0.2706 |
| 1.0546 | 46.57 | 17000 | 0.3102 | 0.2677 |
| 1.0425 | 47.94 | 17500 | 0.3115 | 0.2679 |
| 1.0398 | 49.31 | 18000 | 0.3131 | 0.2666 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu113
- Datasets 1.18.1.dev0
- Tokenizers 0.10.3
|
emre/wav2vec2-xls-r-300m-as-CV8-v1
|
emre
| 2022-03-24T11:55:32Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"as",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
language: as
tags:
- generated_from_trainer
- robust-speech-event
- hf-asr-leaderboard
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-300m-as-CV8-v1
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8.0
type: mozilla-foundation/common_voice_8_0
args: as
metrics:
- name: Test WER
type: wer
value: 100.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-as-CV8-v1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
emre/wav2vec2-xls-r-300m-Br-small
|
emre
| 2022-03-24T11:55:29Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"br",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
language: br
tags:
- generated_from_trainer
- robust-speech-event
- hf-asr-leaderboard
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-300m-Br-small
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice br
type: common_voice
args: br
metrics:
- name: Test WER
type: wer
value: 66.75
---
# wav2vec2-xls-r-300m-Br-small
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0573
- Wer: 0.6675
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.7464 | 2.79 | 400 | 1.7474 | 1.1018 |
| 1.1117 | 5.59 | 800 | 0.9434 | 0.8697 |
| 0.6481 | 8.39 | 1200 | 0.9251 | 0.7910 |
| 0.4754 | 11.19 | 1600 | 0.9208 | 0.7412 |
| 0.3602 | 13.98 | 2000 | 0.9284 | 0.7232 |
| 0.2873 | 16.78 | 2400 | 0.9299 | 0.6940 |
| 0.2386 | 19.58 | 2800 | 1.0182 | 0.6927 |
| 0.1971 | 22.38 | 3200 | 1.0456 | 0.6898 |
| 0.1749 | 25.17 | 3600 | 1.0208 | 0.6769 |
| 0.1487 | 27.97 | 4000 | 1.0573 | 0.6675 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
comodoro/wav2vec2-xls-r-300m-sk-cv8
|
comodoro
| 2022-03-24T11:55:26Z | 34,804 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"xlsr-fine-tuning-week",
"hf-asr-leaderboard",
"sk",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- sk
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- robust-speech-event
- xlsr-fine-tuning-week
- hf-asr-leaderboard
datasets:
- common_voice
model-index:
- name: Slovak comodoro Wav2Vec2 XLSR 300M CV8
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: sk
metrics:
- name: Test WER
type: wer
value: 49.6
- name: Test CER
type: cer
value: 13.3
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: sk
metrics:
- name: Test WER
type: wer
value: 81.7
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: sk
metrics:
- name: Test WER
type: wer
value: 80.26
---
# wav2vec2-xls-r-300m-sk-cv8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the Common Voice 8.0 Slovak (sk) dataset.
It achieves the following results on the evaluation set:
- WER: 0.4958
- CER: 0.1333
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("mozilla-foundation/common_voice_8_0", "sk", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("comodoro/wav2vec2-xls-r-300m-sk-cv8")
model = Wav2Vec2ForCTC.from_pretrained("comodoro/wav2vec2-xls-r-300m-sk-cv8")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated using the attached `eval.py` script:
```
python eval.py --model_id comodoro/wav2vec2-xls-r-300m-sk-cv8 --dataset mozilla-foundation/common_voice_8_0 --split test --config sk
```
## Training and evaluation data
The Common Voice 8.0 `train` and `validation` splits were used for training.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-4
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 20
- total_train_batch_size: 640
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
- mixed_precision_training: Native AMP
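The linear schedule with warmup listed above ramps the learning rate from zero to its peak over the warmup steps and then decays it linearly back to zero; a rough sketch, where `total_steps` is an illustrative assumption rather than a value taken from this run:

```python
def linear_schedule_lr(step: int, base_lr: float = 7e-4,
                       warmup_steps: int = 500, total_steps: int = 10_000) -> float:
    # Linear warmup from 0 to base_lr, then linear decay back to 0.
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(linear_schedule_lr(250))  # 0.00035 (halfway through warmup)
```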
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
StephennFernandes/XLS-R-marathi
|
StephennFernandes
| 2022-03-24T11:55:17Z | 16 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"generated_from_trainer",
"hf-asr-leaderboard",
"mr",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- mr
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- robust-speech-event
- generated_from_trainer
- hf-asr-leaderboard
model-index:
- name: XLS-R-marathi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLS-R-marathi
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MR dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1200
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
RuudVelo/wav2vec2-large-xls-r-1b-nl-lm
|
RuudVelo
| 2022-03-24T11:55:12Z | 20 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"nl",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language:
- nl
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- nl
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-large-xls-r-1b-nl-lm
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: nl
metrics:
- name: Test WER
type: wer
value: 9.73
- name: Test CER
type: cer
value: 2.89
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: nl
metrics:
- name: Test WER
type: wer
value: 27.27
- name: Test CER
type: cer
value: 13.23
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: nl
metrics:
- name: Test WER
type: wer
value: 27.67
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-1b-nl-lm
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the Common Voice 8 dataset.
It achieves the following results on the test set:
- Loss: 0.1479
- Wer: 0.1156
Note that the test results above come from the original model without a language model (LM), which can be found at https://huggingface.co/RuudVelo/wav2vec2-large-xls-r-1b-nl. Results with the LM are listed in the metrics section of this model card.
## Model description
This is the RuudVelo/wav2vec2-large-xls-r-1b-nl model augmented with a KenLM 5-gram language model.
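KenLM itself is a separate C++ toolkit; purely to illustrate the kind of statistics a 5-gram language model is estimated from, here is a toy n-gram counter in Python (not how KenLM is actually trained or queried):

```python
from collections import Counter

def ngram_counts(tokens, n=5):
    """Count all length-n token sequences (n-grams) in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

tokens = "de kat zat op de mat en de kat sliep".split()
counts = ngram_counts(tokens, n=2)
print(counts[("de", "kat")])  # 2
```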
## Intended uses & limitations
More information needed
## Training and evaluation data
The Common Voice 8 nl dataset was used to train the model.
## Training procedure
### Training hyperparameters
Parameters can be found in the run.sh file at https://huggingface.co/RuudVelo/wav2vec2-large-xls-r-1b-nl
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
RASMUS/wav2vec2-xlsr-1b-et
|
RASMUS
| 2022-03-24T11:55:09Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"mozilla-foundation/common_voice_8_0",
"audio",
"speech",
"robust-speech-event",
"hf-asr-leaderboard",
"et",
"dataset:mozilla-foundation/common_voice_8_0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language: et
datasets:
- mozilla-foundation/common_voice_8_0
metrics:
- wer
- cer
tags:
- generated_from_trainer
- mozilla-foundation/common_voice_8_0
- audio
- automatic-speech-recognition
- speech
- robust-speech-event
- hf-asr-leaderboard
model-index:
- name: XLS-R 1B Wav2Vec2 Estonian by Rasmus Toivanen
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: et
metrics:
- name: Test WER
type: wer
value: 20.12
- name: Test CER
type: cer
value: 3.82
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: et
metrics:
- name: Test WER
type: wer
value: 40.77
- name: Test CER
type: cer
value: 12.32
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: et
metrics:
- name: Test WER
type: wer
value: 41.97
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xlsr-et-lm-1B
This model was fine-tuned on the mozilla-foundation/common_voice_8_0 Estonian (et) dataset using the train+other+validation splits.
It achieves the following results on the test set:
(Loss reported at the last evaluation step during training, step 2000 of 2040)
- Loss: 0.2150
- Wer: 0.2012
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00005
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 1
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
HarrisDePerceptron/xlsr-large-53-ur
|
HarrisDePerceptron
| 2022-03-24T11:54:55Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"ur",
"robust-speech-event",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language:
- ur
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- ur
- robust-speech-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: ''
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8.0
type: mozilla-foundation/common_voice_8_0
args: ur
metrics:
- name: Test WER
type: wer
value: 62.47
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlsr-large-53-ur
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - UR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8888
- Wer: 0.6642
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 10.1224 | 1.96 | 100 | 3.5429 | 1.0 |
| 3.2411 | 3.92 | 200 | 3.1786 | 1.0 |
| 3.1283 | 5.88 | 300 | 3.0571 | 1.0 |
| 3.0044 | 7.84 | 400 | 2.9560 | 0.9996 |
| 2.9388 | 9.8 | 500 | 2.8977 | 1.0011 |
| 2.86 | 11.76 | 600 | 2.6944 | 0.9952 |
| 2.5538 | 13.73 | 700 | 2.0967 | 0.9435 |
| 2.1214 | 15.69 | 800 | 1.4816 | 0.8428 |
| 1.8136 | 17.65 | 900 | 1.2459 | 0.8048 |
| 1.6795 | 19.61 | 1000 | 1.1232 | 0.7649 |
| 1.5571 | 21.57 | 1100 | 1.0510 | 0.7432 |
| 1.4975 | 23.53 | 1200 | 1.0298 | 0.6963 |
| 1.4485 | 25.49 | 1300 | 0.9775 | 0.7074 |
| 1.3924 | 27.45 | 1400 | 0.9798 | 0.6956 |
| 1.3604 | 29.41 | 1500 | 0.9345 | 0.7092 |
| 1.3224 | 31.37 | 1600 | 0.9535 | 0.6830 |
| 1.2816 | 33.33 | 1700 | 0.9178 | 0.6679 |
| 1.2623 | 35.29 | 1800 | 0.9249 | 0.6679 |
| 1.2421 | 37.25 | 1900 | 0.9124 | 0.6734 |
| 1.2208 | 39.22 | 2000 | 0.8962 | 0.6664 |
| 1.2145 | 41.18 | 2100 | 0.8903 | 0.6734 |
| 1.1888 | 43.14 | 2200 | 0.8883 | 0.6708 |
| 1.1933 | 45.1 | 2300 | 0.8928 | 0.6723 |
| 1.1838 | 47.06 | 2400 | 0.8868 | 0.6679 |
| 1.1634 | 49.02 | 2500 | 0.8886 | 0.6657 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
DrishtiSharma/wav2vec2-xls-r-300m-kk-n2
|
DrishtiSharma
| 2022-03-24T11:54:53Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"kk",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language:
- kk
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- kk
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-xls-r-300m-kk-n2
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: kk
metrics:
- name: Test WER
type: wer
value: 0.4355
- name: Test CER
type: cer
value: 0.10469915859660263
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: kk
metrics:
- name: Test WER
type: wer
value: NA
- name: Test CER
type: cer
value: NA
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-kk-n2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - KK dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7149
- Wer: 0.451
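The CER reported in the metadata is the character-level analogue of WER; an illustrative sketch of the computation (again, not the project's actual `eval.py`):

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: character-level Levenshtein distance / reference length."""
    ref, hyp = list(reference), list(hypothesis)
    # Rolling-row edit distance to keep memory at O(len(hyp)).
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        cur = [i]
        for j, h in enumerate(hyp, start=1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (r != h)))  # substitution
        prev = cur
    return prev[-1] / len(ref)

print(cer("salem", "salam"))  # 0.2
```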
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-xls-r-300m-kk-n2 --dataset mozilla-foundation/common_voice_8_0 --config kk --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
The Kazakh language is not available in speech-recognition-community-v2/dev_data.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000222
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 150.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 9.6799 | 9.09 | 200 | 3.6119 | 1.0 |
| 3.1332 | 18.18 | 400 | 2.5352 | 1.005 |
| 1.0465 | 27.27 | 600 | 0.6169 | 0.682 |
| 0.3452 | 36.36 | 800 | 0.6572 | 0.607 |
| 0.2575 | 45.44 | 1000 | 0.6527 | 0.578 |
| 0.2088 | 54.53 | 1200 | 0.6828 | 0.551 |
| 0.158 | 63.62 | 1400 | 0.7074 | 0.5575 |
| 0.1309 | 72.71 | 1600 | 0.6523 | 0.5595 |
| 0.1074 | 81.8 | 1800 | 0.7262 | 0.5415 |
| 0.087 | 90.89 | 2000 | 0.7199 | 0.521 |
| 0.0711 | 99.98 | 2200 | 0.7113 | 0.523 |
| 0.0601 | 109.09 | 2400 | 0.6863 | 0.496 |
| 0.0451 | 118.18 | 2600 | 0.6998 | 0.483 |
| 0.0378 | 127.27 | 2800 | 0.6971 | 0.4615 |
| 0.0319 | 136.36 | 3000 | 0.7119 | 0.4475 |
| 0.0305 | 145.44 | 3200 | 0.7181 | 0.459 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
DrishtiSharma/wav2vec2-large-xls-r-300m-or-d5
|
DrishtiSharma
| 2022-03-24T11:54:47Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"or",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language:
- or
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- or
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-large-xls-r-300m-or-d5
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: or
metrics:
- name: Test WER
type: wer
value: 0.579136690647482
- name: Test CER
type: cer
value: 0.1572148018392818
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: or
metrics:
- name: Test WER
type: wer
value: NA
- name: Test CER
type: cer
value: NA
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-or-d5
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - OR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9571
- Wer: 0.5450
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-or-d5 --dataset mozilla-foundation/common_voice_8_0 --config or --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-or-d5 --dataset speech-recognition-community-v2/dev_data --config or --split validation --chunk_length_s 10 --stride_length_s 1
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000111
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 800
- num_epochs: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 9.2958 | 12.5 | 300 | 4.9014 | 1.0 |
| 3.4065 | 25.0 | 600 | 3.5150 | 1.0 |
| 1.5402 | 37.5 | 900 | 0.8356 | 0.7249 |
| 0.6049 | 50.0 | 1200 | 0.7754 | 0.6349 |
| 0.4074 | 62.5 | 1500 | 0.7994 | 0.6217 |
| 0.3097 | 75.0 | 1800 | 0.8815 | 0.5985 |
| 0.2593 | 87.5 | 2100 | 0.8532 | 0.5754 |
| 0.2097 | 100.0 | 2400 | 0.9077 | 0.5648 |
| 0.1784 | 112.5 | 2700 | 0.9047 | 0.5668 |
| 0.1567 | 125.0 | 3000 | 0.9019 | 0.5728 |
| 0.1315 | 137.5 | 3300 | 0.9295 | 0.5827 |
| 0.1125 | 150.0 | 3600 | 0.9256 | 0.5681 |
| 0.1035 | 162.5 | 3900 | 0.9148 | 0.5496 |
| 0.0901 | 175.0 | 4200 | 0.9480 | 0.5483 |
| 0.0817 | 187.5 | 4500 | 0.9799 | 0.5516 |
| 0.079 | 200.0 | 4800 | 0.9571 | 0.5450 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
DrishtiSharma/wav2vec2-large-xls-r-300m-mr-v2
|
DrishtiSharma
| 2022-03-24T11:54:45Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"mr",
"robust-speech-event",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language:
- mr
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- mr
- robust-speech-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-large-xls-r-300m-mr-v2
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: mr
metrics:
- name: Test WER
type: wer
value: 0.49378259125551544
- name: Test CER
type: cer
value: 0.12470799640610962
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: mr
metrics:
- name: Test WER
type: wer
value: NA
- name: Test CER
type: cer
value: NA
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-mr-v2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8729
- Wer: 0.4942
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
```bash
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-mr-v2 --dataset mozilla-foundation/common_voice_8_0 --config mr --split test --log_outputs
```
2. To evaluate on speech-recognition-community-v2/dev_data
```bash
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-mr-v2 --dataset speech-recognition-community-v2/dev_data --config mr --split validation --chunk_length_s 10 --stride_length_s 1
```
Note: The Marathi language is not available in speech-recognition-community-v2/dev_data.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000333
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 8.4934 | 9.09 | 200 | 3.7326 | 1.0 |
| 3.4234 | 18.18 | 400 | 3.3383 | 0.9996 |
| 3.2628 | 27.27 | 600 | 2.7482 | 0.9992 |
| 1.7743 | 36.36 | 800 | 0.6755 | 0.6787 |
| 1.0346 | 45.45 | 1000 | 0.6067 | 0.6193 |
| 0.8137 | 54.55 | 1200 | 0.6228 | 0.5612 |
| 0.6637 | 63.64 | 1400 | 0.5976 | 0.5495 |
| 0.5563 | 72.73 | 1600 | 0.7009 | 0.5383 |
| 0.4844 | 81.82 | 1800 | 0.6662 | 0.5287 |
| 0.4057 | 90.91 | 2000 | 0.6911 | 0.5303 |
| 0.3582 | 100.0 | 2200 | 0.7207 | 0.5327 |
| 0.3163 | 109.09 | 2400 | 0.7107 | 0.5118 |
| 0.2761 | 118.18 | 2600 | 0.7538 | 0.5118 |
| 0.2415 | 127.27 | 2800 | 0.7850 | 0.5178 |
| 0.2127 | 136.36 | 3000 | 0.8016 | 0.5034 |
| 0.1873 | 145.45 | 3200 | 0.8302 | 0.5187 |
| 0.1723 | 154.55 | 3400 | 0.9085 | 0.5223 |
| 0.1498 | 163.64 | 3600 | 0.8396 | 0.5126 |
| 0.1425 | 172.73 | 3800 | 0.8776 | 0.5094 |
| 0.1258 | 181.82 | 4000 | 0.8651 | 0.5014 |
| 0.117 | 190.91 | 4200 | 0.8772 | 0.4970 |
| 0.1093 | 200.0 | 4400 | 0.8729 | 0.4942 |
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
|
DrishtiSharma/wav2vec2-large-xls-r-300m-br-d2
|
DrishtiSharma
| 2022-03-24T11:54:37Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"br",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language:
- br
license: apache-2.0
tags:
- generated_from_trainer
- robust-speech-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
metrics:
- wer
model-index:
- name: wav2vec2-large-xls-r-300m-br-d2
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
type: mozilla-foundation/common_voice_8_0
name: Common Voice 8
args: br
metrics:
- type: wer
value: 0.49770598355954887
name: Test WER
- name: Test CER
type: cer
value: 0.18090500890299605
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: br
metrics:
- name: Test WER
type: wer
value: NA
- name: Test CER
type: cer
value: NA
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-br-d2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - BR dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1257
- Wer: 0.4631
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
```bash
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-br-d2 --dataset mozilla-foundation/common_voice_8_0 --config br --split test --log_outputs
```
2. To evaluate on speech-recognition-community-v2/dev_data
The Breton language isn't available in speech-recognition-community-v2/dev_data.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00034
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 750
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 14.0379 | 0.68 | 100 | 5.6808 | 1.0 |
| 3.9145 | 1.35 | 200 | 3.1970 | 1.0 |
| 3.0293 | 2.03 | 300 | 2.9513 | 1.0 |
| 2.0927 | 2.7 | 400 | 1.4545 | 0.8887 |
| 1.1556 | 3.38 | 500 | 1.0966 | 0.7564 |
| 0.9628 | 4.05 | 600 | 0.9808 | 0.7364 |
| 0.7869 | 4.73 | 700 | 1.0488 | 0.7355 |
| 0.703 | 5.41 | 800 | 0.9500 | 0.6881 |
| 0.6657 | 6.08 | 900 | 0.9309 | 0.6259 |
| 0.5663 | 6.76 | 1000 | 0.9133 | 0.6357 |
| 0.496 | 7.43 | 1100 | 0.9890 | 0.6028 |
| 0.4748 | 8.11 | 1200 | 0.9469 | 0.5894 |
| 0.4135 | 8.78 | 1300 | 0.9270 | 0.6045 |
| 0.3579 | 9.46 | 1400 | 0.8818 | 0.5708 |
| 0.353 | 10.14 | 1500 | 0.9244 | 0.5781 |
| 0.334 | 10.81 | 1600 | 0.9009 | 0.5638 |
| 0.2917 | 11.49 | 1700 | 1.0132 | 0.5828 |
| 0.29 | 12.16 | 1800 | 0.9696 | 0.5668 |
| 0.2691 | 12.84 | 1900 | 0.9811 | 0.5455 |
| 0.25 | 13.51 | 2000 | 0.9951 | 0.5624 |
| 0.2467 | 14.19 | 2100 | 0.9653 | 0.5573 |
| 0.2242 | 14.86 | 2200 | 0.9714 | 0.5378 |
| 0.2066 | 15.54 | 2300 | 0.9829 | 0.5394 |
| 0.2075 | 16.22 | 2400 | 1.0547 | 0.5520 |
| 0.1923 | 16.89 | 2500 | 1.0014 | 0.5397 |
| 0.1919 | 17.57 | 2600 | 0.9978 | 0.5477 |
| 0.1908 | 18.24 | 2700 | 1.1064 | 0.5397 |
| 0.157 | 18.92 | 2800 | 1.0629 | 0.5238 |
| 0.159 | 19.59 | 2900 | 1.0642 | 0.5321 |
| 0.1652 | 20.27 | 3000 | 1.0207 | 0.5328 |
| 0.141 | 20.95 | 3100 | 0.9948 | 0.5312 |
| 0.1417 | 21.62 | 3200 | 1.0338 | 0.5328 |
| 0.1514 | 22.3 | 3300 | 1.0513 | 0.5313 |
| 0.1365 | 22.97 | 3400 | 1.0357 | 0.5291 |
| 0.1319 | 23.65 | 3500 | 1.0587 | 0.5167 |
| 0.1298 | 24.32 | 3600 | 1.0636 | 0.5236 |
| 0.1245 | 25.0 | 3700 | 1.1367 | 0.5280 |
| 0.1114 | 25.68 | 3800 | 1.0633 | 0.5200 |
| 0.1088 | 26.35 | 3900 | 1.0495 | 0.5210 |
| 0.1175 | 27.03 | 4000 | 1.0897 | 0.5095 |
| 0.1043 | 27.7 | 4100 | 1.0580 | 0.5309 |
| 0.0951 | 28.38 | 4200 | 1.0448 | 0.5067 |
| 0.1011 | 29.05 | 4300 | 1.0665 | 0.5137 |
| 0.0889 | 29.73 | 4400 | 1.0579 | 0.5026 |
| 0.0833 | 30.41 | 4500 | 1.0740 | 0.5037 |
| 0.0889 | 31.08 | 4600 | 1.0933 | 0.5083 |
| 0.0784 | 31.76 | 4700 | 1.0715 | 0.5089 |
| 0.0767 | 32.43 | 4800 | 1.0658 | 0.5049 |
| 0.0769 | 33.11 | 4900 | 1.1118 | 0.4979 |
| 0.0722 | 33.78 | 5000 | 1.1413 | 0.4986 |
| 0.0709 | 34.46 | 5100 | 1.0706 | 0.4885 |
| 0.0664 | 35.14 | 5200 | 1.1217 | 0.4884 |
| 0.0648 | 35.81 | 5300 | 1.1298 | 0.4941 |
| 0.0657 | 36.49 | 5400 | 1.1330 | 0.4920 |
| 0.0582 | 37.16 | 5500 | 1.0598 | 0.4835 |
| 0.0602 | 37.84 | 5600 | 1.1097 | 0.4943 |
| 0.0598 | 38.51 | 5700 | 1.0976 | 0.4876 |
| 0.0547 | 39.19 | 5800 | 1.0734 | 0.4825 |
| 0.0561 | 39.86 | 5900 | 1.0926 | 0.4850 |
| 0.0516 | 40.54 | 6000 | 1.1579 | 0.4751 |
| 0.0478 | 41.22 | 6100 | 1.1384 | 0.4706 |
| 0.0396 | 41.89 | 6200 | 1.1462 | 0.4739 |
| 0.0472 | 42.57 | 6300 | 1.1277 | 0.4732 |
| 0.0447 | 43.24 | 6400 | 1.1517 | 0.4752 |
| 0.0423 | 43.92 | 6500 | 1.1219 | 0.4784 |
| 0.0426 | 44.59 | 6600 | 1.1311 | 0.4724 |
| 0.0391 | 45.27 | 6700 | 1.1135 | 0.4692 |
| 0.0362 | 45.95 | 6800 | 1.0878 | 0.4645 |
| 0.0329 | 46.62 | 6900 | 1.1137 | 0.4668 |
| 0.0356 | 47.3 | 7000 | 1.1233 | 0.4687 |
| 0.0328 | 47.97 | 7100 | 1.1238 | 0.4653 |
| 0.0323 | 48.65 | 7200 | 1.1307 | 0.4646 |
| 0.0325 | 49.32 | 7300 | 1.1242 | 0.4645 |
| 0.03 | 50.0 | 7400 | 1.1257 | 0.4631 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
DrishtiSharma/wav2vec2-large-xls-r-300m-ab-CV7
|
DrishtiSharma
| 2022-03-24T11:54:32Z | 9 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"ab",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language:
- ab
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
- ab
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: wav2vec2-large-xls-r-300m-ab-CV7
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: ab
metrics:
- name: Test WER
type: wer
value: 0.5291160452450775
- name: Test CER
type: cer
value: 0.10630270750110964
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: ab
metrics:
- name: Test WER
type: wer
value: NA
- name: Test CER
type: cer
value: NA
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-ab-CV7
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5620
- Wer: 0.5651
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_7_0 with test split
```bash
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-ab-CV7 --dataset mozilla-foundation/common_voice_7_0 --config ab --split test --log_outputs
```
2. To evaluate on speech-recognition-community-v2/dev_data
NA
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 9.6445 | 13.64 | 300 | 4.3963 | 1.0 |
| 3.6459 | 27.27 | 600 | 3.2267 | 1.0 |
| 3.0978 | 40.91 | 900 | 3.0927 | 1.0 |
| 2.8357 | 54.55 | 1200 | 2.1462 | 1.0029 |
| 1.2723 | 68.18 | 1500 | 0.6747 | 0.6996 |
| 0.6528 | 81.82 | 1800 | 0.5928 | 0.6422 |
| 0.4905 | 95.45 | 2100 | 0.5587 | 0.5681 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
smangrul/xls-r-mr-model
|
smangrul
| 2022-03-24T11:54:20Z | 8 | 1 |
transformers
|
[
"transformers",
"pytorch",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"openslr",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"mr",
"dataset:mozilla-foundation/common_voice_8_0",
"dataset:openslr",
"dataset:shivam/marathi_samanantar_processed",
"dataset:shivam/marathi_pib_processed",
"dataset:opus100",
"dataset:tatoeba",
"dataset:tapaco",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- mr
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- openslr
- generated_from_trainer
- robust-speech-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
- openslr
- shivam/marathi_samanantar_processed
- shivam/marathi_pib_processed
- opus100
- tatoeba
- tapaco
model-index:
- name: wav2vec2-large-xls-r-300m-mr
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
type: mozilla-foundation/common_voice_8_0
name: Common Voice 8
args: mr
metrics:
- type: wer
value: 31.05
name: Test WER
- name: Test CER
type: cer
value: 6.82
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-mr
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MR and OPENSLR - SLR64 - MR datasets.
It achieves the following results on the evaluation set:
- Loss: 0.494580
- Wer: 0.401524
### Eval results on Common Voice 8 "test" (WER):
| Without LM | With LM |
|---|---|
| 40.513437625350984 | 31.04693140794224 |
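WER values like those above can be computed with a plain word-level edit distance. The sketch below is a self-contained illustration of the metric (real evaluations typically use a library such as `jiwer`), not the eval script used for this card:

```python
# Self-contained WER sketch: WER = word-level Levenshtein distance divided
# by the number of reference words.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    prev = list(range(len(hyp) + 1))  # distances for the empty reference prefix
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (r != h)))  # substitution / match
        prev = cur
    return prev[-1] / len(ref)
```

The same recurrence applied over characters instead of words gives CER.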
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 200.0
- mixed_precision_training: Native AMP
### Training results
| Step | Training Loss | Validation Loss | Wer |
|---|---|---|---|
| 400 | 3.794000 | 3.532227 | 1.000000 |
| 800 | 3.362400 | 3.359044 | 1.000000 |
| 1200 | 2.293900 | 1.011279 | 0.829924 |
| 1600 | 1.233000 | 0.502743 | 0.593662 |
| 2000 | 0.962600 | 0.412519 | 0.496992 |
| 2400 | 0.831800 | 0.402903 | 0.493783 |
| 2800 | 0.737000 | 0.389773 | 0.469314 |
| 3200 | 0.677100 | 0.373987 | 0.436021 |
| 3600 | 0.634400 | 0.383823 | 0.432010 |
| 4000 | 0.586000 | 0.375610 | 0.419575 |
| 4400 | 0.561000 | 0.387891 | 0.418371 |
| 4800 | 0.518500 | 0.386357 | 0.417569 |
| 5200 | 0.515300 | 0.415069 | 0.430004 |
| 5600 | 0.478100 | 0.399211 | 0.408744 |
| 6000 | 0.468100 | 0.424542 | 0.402327 |
| 6400 | 0.439400 | 0.430979 | 0.410750 |
| 6800 | 0.429600 | 0.427700 | 0.409146 |
| 7200 | 0.400300 | 0.451111 | 0.419976 |
| 7600 | 0.395100 | 0.463446 | 0.405134 |
| 8000 | 0.381800 | 0.454752 | 0.407942 |
| 8400 | 0.371500 | 0.461547 | 0.404733 |
| 8800 | 0.362500 | 0.461543 | 0.411151 |
| 9200 | 0.338200 | 0.468299 | 0.417168 |
| 9600 | 0.338800 | 0.480989 | 0.412355 |
| 10000 | 0.317600 | 0.475700 | 0.410750 |
| 10400 | 0.315100 | 0.478920 | 0.403530 |
| 10800 | 0.296200 | 0.480600 | 0.398315 |
| 11200 | 0.299000 | 0.477083 | 0.393502 |
| 11600 | 0.290000 | 0.465646 | 0.393903 |
| 12000 | 0.290900 | 0.490041 | 0.405937 |
| 12400 | 0.275600 | 0.489354 | 0.399519 |
| 12800 | 0.272600 | 0.494580 | 0.395909 |
| 13200 | 0.265900 | 0.497918 | 0.397112 |
| 13600 | 0.266300 | 0.498627 | 0.397513 |
| 14000 | 0.259600 | 0.504610 | 0.401524 |
#### Evaluation Commands
To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id smangrul/xls-r-mr-model --dataset mozilla-foundation/common_voice_8_0 --config mr --split test
```
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3.dev0
- Tokenizers 0.11.0
|
shpotes/xls-r-eus
|
shpotes
| 2022-03-24T11:54:17Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"et",
"hf-asr-leaderboard",
"eu",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- eu
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- robust-speech-event
- et
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: xls-r-eus
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: eu
metrics:
- name: Test WER
type: wer
value: 0.17871523648578164
- name: Test CER
type: cer
value: 0.032624506085144
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r-eus
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - EU dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2278
- Wer: 0.1787
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 72
- eval_batch_size: 72
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 144
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
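With `train_batch_size: 72` and `gradient_accumulation_steps: 2`, gradients from two micro-batches are combined before each optimizer step, which is where the `total_train_batch_size: 144` above comes from. A toy sketch of that bookkeeping (illustrative only, not the Trainer's internals):

```python
# Illustrative gradient-accumulation sketch: gradients are buffered across
# micro-batches and the optimizer steps only every `accumulation_steps`.
def accumulate(micro_batch_grads, accumulation_steps=2):
    """Average micro-batch gradients; return (optimizer_steps, applied_grads)."""
    applied, buf = [], 0.0
    for k, g in enumerate(micro_batch_grads, 1):
        buf += g                                      # accumulate instead of stepping
        if k % accumulation_steps == 0:
            applied.append(buf / accumulation_steps)  # one optimizer step
            buf = 0.0
    return len(applied), applied

effective_batch = 72 * 2  # matches total_train_batch_size above
```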
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.2548 | 4.24 | 500 | 0.2470 | 0.3663 |
| 0.1435 | 8.47 | 1000 | 0.2000 | 0.2791 |
| 0.1158 | 12.71 | 1500 | 0.2030 | 0.2652 |
| 0.1094 | 16.95 | 2000 | 0.2096 | 0.2605 |
| 0.1004 | 21.19 | 2500 | 0.2150 | 0.2477 |
| 0.0945 | 25.42 | 3000 | 0.2072 | 0.2369 |
| 0.0844 | 29.66 | 3500 | 0.1981 | 0.2328 |
| 0.0877 | 33.89 | 4000 | 0.2041 | 0.2425 |
| 0.0741 | 38.14 | 4500 | 0.2353 | 0.2421 |
| 0.0676 | 42.37 | 5000 | 0.2092 | 0.2213 |
| 0.0623 | 46.61 | 5500 | 0.2217 | 0.2250 |
| 0.0574 | 50.84 | 6000 | 0.2152 | 0.2179 |
| 0.0583 | 55.08 | 6500 | 0.2207 | 0.2186 |
| 0.0488 | 59.32 | 7000 | 0.2225 | 0.2159 |
| 0.0456 | 63.56 | 7500 | 0.2293 | 0.2031 |
| 0.041 | 67.79 | 8000 | 0.2277 | 0.2013 |
| 0.0379 | 72.03 | 8500 | 0.2287 | 0.1991 |
| 0.0381 | 76.27 | 9000 | 0.2233 | 0.1954 |
| 0.0308 | 80.51 | 9500 | 0.2195 | 0.1835 |
| 0.0291 | 84.74 | 10000 | 0.2266 | 0.1825 |
| 0.0266 | 88.98 | 10500 | 0.2285 | 0.1801 |
| 0.0266 | 93.22 | 11000 | 0.2292 | 0.1801 |
| 0.0262 | 97.46 | 11500 | 0.2278 | 0.1788 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.4.dev0
- Tokenizers 0.11.0
|
shpotes/xls-r-et
|
shpotes
| 2022-03-24T11:54:15Z | 29 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"robust-speech-event",
"et",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- et
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
- robust-speech-event
- et
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: ''
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: et
metrics:
- name: Test WER
type: wer
value: 0.34753420299077314
- name: Test CER
type: cer
value: 0.07542956089330906
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: et
metrics:
- name: Test WER
type: wer
value: 47.17
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: et
metrics:
- name: Test WER
type: wer
value: 54.72
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r-et
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - ET dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4835
- Wer: 0.3475
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 72
- eval_batch_size: 72
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 144
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3825 | 12.5 | 500 | 0.4022 | 0.5059 |
| 0.1592 | 25.0 | 1000 | 0.4585 | 0.4456 |
| 0.1215 | 37.5 | 1500 | 0.4550 | 0.4164 |
| 0.0972 | 50.0 | 2000 | 0.4725 | 0.4088 |
| 0.0731 | 62.5 | 2500 | 0.4568 | 0.3824 |
| 0.0527 | 75.0 | 3000 | 0.4712 | 0.3653 |
| 0.0428 | 87.5 | 3500 | 0.4813 | 0.3520 |
| 0.0383 | 100.0 | 4000 | 0.4835 | 0.3475 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.4.dev0
- Tokenizers 0.11.0
|
sammy786/wav2vec2-xlsr-Basaa
|
sammy786
| 2022-03-24T11:54:12Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"bas",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- bas
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- bas
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: sammy786/wav2vec2-xlsr-basaa
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: bas
metrics:
- name: Test WER
type: wer
value: 41.23
- name: Test CER
type: cer
value: 13.54
---
# sammy786/wav2vec2-xlsr-basaa
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - bas dataset.
It achieves the following results on the evaluation set (10 percent of the train split merged with the other and dev splits):
- Loss: 21.39
- Wer: 30.99
## Model description
"facebook/wav2vec2-xls-r-1b" was finetuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data -
Common Voice Basaa train.tsv, dev.tsv and other.tsv
## Training procedure
To create the train dataset, all available splits were concatenated and a 90-10 train-evaluation split was applied.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000045637994662983496
- train_batch_size: 16
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 70
- mixed_precision_training: Native AMP
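The `cosine_with_restarts` scheduler named above warms up linearly and then follows a cosine decay that jumps back to the peak learning rate at the start of each cycle. A hedged pure-Python sketch of that shape (the peak LR and warmup steps come from the hyperparameters above; the total-step and cycle counts are illustrative assumptions, not values from this run):

```python
import math

# Hedged sketch of a `cosine_with_restarts` LR schedule: linear warmup,
# then a cosine curve that restarts at the peak LR each cycle. total_steps
# and num_cycles are assumptions for illustration only.
def cosine_restarts_lr(step, base_lr=4.5637994662983496e-05,
                       warmup_steps=500, total_steps=2200, num_cycles=2):
    if step < warmup_steps:                        # linear warmup
        return base_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    cycle_pos = (progress * num_cycles) % 1.0      # position within the current cycle
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * cycle_pos))
```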
### Training results
| Step | Training Loss | Validation Loss | Wer |
|------|---------------|-----------------|----------|
| 200 | 6.734100 | 1.605006 | 0.980456 |
| 400 | 1.011200 | 0.364686 | 0.442997 |
| 600 | 0.709300 | 0.300204 | 0.377850 |
| 800 | 0.469800 | 0.315612 | 0.405537 |
| 1000 | 0.464700 | 0.352494 | 0.372964 |
| 1200 | 0.421900 | 0.342533 | 0.368078 |
| 1400 | 0.401900 | 0.351398 | 0.343648 |
| 1600 | 0.429800 | 0.350570 | 0.348534 |
| 1800 | 0.352600 | 0.356601 | 0.358306 |
| 2000 | 0.387200 | 0.355814 | 0.356678 |
| 2200 | 0.362400 | 0.345573 | 0.355049 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id sammy786/wav2vec2-xlsr-basaa --dataset mozilla-foundation/common_voice_8_0 --config bas --split test
```
|
lgris/wav2vec2-xls-r-300m-gn-cv8
|
lgris
| 2022-03-24T11:54:03Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"gn",
"robust-speech-event",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- gn
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- gn
- robust-speech-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-xls-r-300m-gn-cv8
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: gn
metrics:
- name: Test WER
type: wer
value: 69.05
- name: Test CER
type: cer
value: 14.7
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8.0
type: mozilla-foundation/common_voice_8_0
args: gn
metrics:
- name: Test WER
type: wer
value: 69.05
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-gn-cv8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9392
- Wer: 0.7033
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 20.0601 | 5.54 | 100 | 5.1622 | 1.0 |
| 3.7052 | 11.11 | 200 | 3.2869 | 1.0 |
| 3.3275 | 16.65 | 300 | 3.2162 | 1.0 |
| 3.2984 | 22.22 | 400 | 3.1638 | 1.0 |
| 3.1111 | 27.76 | 500 | 2.5541 | 1.0 |
| 2.238 | 33.32 | 600 | 1.2198 | 0.9616 |
| 1.5284 | 38.86 | 700 | 0.9571 | 0.8593 |
| 1.2735 | 44.43 | 800 | 0.8719 | 0.8363 |
| 1.1269 | 49.97 | 900 | 0.8334 | 0.7954 |
| 1.0427 | 55.54 | 1000 | 0.7700 | 0.7749 |
| 1.0152 | 61.11 | 1100 | 0.7747 | 0.7877 |
| 0.943 | 66.65 | 1200 | 0.7151 | 0.7442 |
| 0.9132 | 72.22 | 1300 | 0.7224 | 0.7289 |
| 0.8397 | 77.76 | 1400 | 0.7354 | 0.7059 |
| 0.8577 | 83.32 | 1500 | 0.7285 | 0.7263 |
| 0.7931 | 88.86 | 1600 | 0.7863 | 0.7084 |
| 0.7995 | 94.43 | 1700 | 0.7562 | 0.6880 |
| 0.799 | 99.97 | 1800 | 0.7905 | 0.7059 |
| 0.7373 | 105.54 | 1900 | 0.7791 | 0.7161 |
| 0.749 | 111.11 | 2000 | 0.8125 | 0.7161 |
| 0.6925 | 116.65 | 2100 | 0.7722 | 0.6905 |
| 0.7034 | 122.22 | 2200 | 0.8989 | 0.7136 |
| 0.6745 | 127.76 | 2300 | 0.8270 | 0.6982 |
| 0.6837 | 133.32 | 2400 | 0.8569 | 0.7161 |
| 0.6689 | 138.86 | 2500 | 0.8339 | 0.6982 |
| 0.6471 | 144.43 | 2600 | 0.8441 | 0.7110 |
| 0.615 | 149.97 | 2700 | 0.9038 | 0.7212 |
| 0.6477 | 155.54 | 2800 | 0.9089 | 0.7059 |
| 0.6047 | 161.11 | 2900 | 0.9149 | 0.7059 |
| 0.5613 | 166.65 | 3000 | 0.8582 | 0.7263 |
| 0.6017 | 172.22 | 3100 | 0.8787 | 0.7084 |
| 0.5546 | 177.76 | 3200 | 0.8753 | 0.6957 |
| 0.5747 | 183.32 | 3300 | 0.9167 | 0.7212 |
| 0.5535 | 188.86 | 3400 | 0.8448 | 0.6905 |
| 0.5331 | 194.43 | 3500 | 0.8644 | 0.7161 |
| 0.5428 | 199.97 | 3600 | 0.8730 | 0.7033 |
| 0.5219 | 205.54 | 3700 | 0.9047 | 0.6982 |
| 0.5158 | 211.11 | 3800 | 0.8706 | 0.7033 |
| 0.5107 | 216.65 | 3900 | 0.9139 | 0.7084 |
| 0.4903 | 222.22 | 4000 | 0.9456 | 0.7315 |
| 0.4772 | 227.76 | 4100 | 0.9475 | 0.7161 |
| 0.4713 | 233.32 | 4200 | 0.9237 | 0.7059 |
| 0.4743 | 238.86 | 4300 | 0.9305 | 0.6957 |
| 0.4705 | 244.43 | 4400 | 0.9561 | 0.7110 |
| 0.4908 | 249.97 | 4500 | 0.9389 | 0.7084 |
| 0.4717 | 255.54 | 4600 | 0.9234 | 0.6982 |
| 0.4462 | 261.11 | 4700 | 0.9323 | 0.6957 |
| 0.4556 | 266.65 | 4800 | 0.9432 | 0.7033 |
| 0.4691 | 272.22 | 4900 | 0.9389 | 0.7059 |
| 0.4601 | 277.76 | 5000 | 0.9392 | 0.7033 |
### Framework versions
- Transformers 4.16.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.11.0
|
lgris/wav2vec2-xls-r-300m-gn-cv8-4
|
lgris
| 2022-03-24T11:54:00Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"gn",
"robust-speech-event",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- gn
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- gn
- robust-speech-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-xls-r-300m-gn-cv8-4
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8.0
type: mozilla-foundation/common_voice_8_0
args: gn
metrics:
- name: Test WER
type: wer
value: 68.45
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-gn-cv8-4
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5805
- Wer: 0.7545
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 13000
- mixed_precision_training: Native AMP
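The batch-size arithmetic above can be cross-checked, and the (unstated) training-set size roughly inferred from the step/epoch ratio in the results table below. This is a back-of-the-envelope sketch, not a documented figure:

```python
# Sanity-check the effective batch size and roughly infer the dataset
# size from the hyperparameters and the training log (a rough sketch).

train_batch_size = 8
gradient_accumulation_steps = 2
effective_batch = train_batch_size * gradient_accumulation_steps
assert effective_batch == 16  # matches total_train_batch_size above

# The log reports ~16.65 epochs after 300 optimizer steps, so:
steps_per_epoch = 300 / 16.65            # ~18 optimizer steps per epoch
approx_train_examples = round(steps_per_epoch * effective_batch)
print(effective_batch, approx_train_examples)  # prints: 16 288
```

The ~288-utterance estimate is consistent with the very small Guaraní Common Voice training split and explains the high epoch counts in the table.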
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| 9.2216 | 16.65 | 300 | 3.2771 | 1.0 |
| 3.1804 | 33.32 | 600 | 2.2869 | 1.0 |
| 1.5856 | 49.97 | 900 | 0.9573 | 0.8772 |
| 1.0299 | 66.65 | 1200 | 0.9044 | 0.8082 |
| 0.8916 | 83.32 | 1500 | 0.9478 | 0.8056 |
| 0.8451 | 99.97 | 1800 | 0.8814 | 0.8107 |
| 0.7649 | 116.65 | 2100 | 0.9897 | 0.7826 |
| 0.7185 | 133.32 | 2400 | 0.9988 | 0.7621 |
| 0.6595 | 149.97 | 2700 | 1.0607 | 0.7749 |
| 0.6211 | 166.65 | 3000 | 1.1826 | 0.7877 |
| 0.59 | 183.32 | 3300 | 1.1060 | 0.7826 |
| 0.5383 | 199.97 | 3600 | 1.1826 | 0.7852 |
| 0.5205 | 216.65 | 3900 | 1.2148 | 0.8261 |
| 0.4786 | 233.32 | 4200 | 1.2710 | 0.7928 |
| 0.4482 | 249.97 | 4500 | 1.1943 | 0.7980 |
| 0.4149 | 266.65 | 4800 | 1.2449 | 0.8031 |
| 0.3904 | 283.32 | 5100 | 1.3100 | 0.7928 |
| 0.3619 | 299.97 | 5400 | 1.3125 | 0.7596 |
| 0.3496 | 316.65 | 5700 | 1.3699 | 0.7877 |
| 0.3277 | 333.32 | 6000 | 1.4344 | 0.8031 |
| 0.2958 | 349.97 | 6300 | 1.4093 | 0.7980 |
| 0.2883 | 366.65 | 6600 | 1.3296 | 0.7570 |
| 0.2598 | 383.32 | 6900 | 1.4026 | 0.7980 |
| 0.2564 | 399.97 | 7200 | 1.4847 | 0.8031 |
| 0.2408 | 416.65 | 7500 | 1.4896 | 0.8107 |
| 0.2266 | 433.32 | 7800 | 1.4232 | 0.7698 |
| 0.224 | 449.97 | 8100 | 1.5560 | 0.7903 |
| 0.2038 | 466.65 | 8400 | 1.5355 | 0.7724 |
| 0.1948 | 483.32 | 8700 | 1.4624 | 0.7621 |
| 0.1995 | 499.97 | 9000 | 1.5808 | 0.7724 |
| 0.1864 | 516.65 | 9300 | 1.5653 | 0.7698 |
| 0.18 | 533.32 | 9600 | 1.4868 | 0.7494 |
| 0.1689 | 549.97 | 9900 | 1.5379 | 0.7749 |
| 0.1624 | 566.65 | 10200 | 1.5936 | 0.7749 |
| 0.1537 | 583.32 | 10500 | 1.6436 | 0.7801 |
| 0.1455 | 599.97 | 10800 | 1.6401 | 0.7673 |
| 0.1437 | 616.65 | 11100 | 1.6069 | 0.7673 |
| 0.1452 | 633.32 | 11400 | 1.6041 | 0.7519 |
| 0.139 | 649.97 | 11700 | 1.5758 | 0.7545 |
| 0.1299 | 666.65 | 12000 | 1.5559 | 0.7545 |
| 0.127 | 683.32 | 12300 | 1.5776 | 0.7596 |
| 0.1264 | 699.97 | 12600 | 1.5790 | 0.7519 |
| 0.1209 | 716.65 | 12900 | 1.5805 | 0.7545 |
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
|
infinitejoy/wav2vec2-large-xls-r-300m-assamese
|
infinitejoy
| 2022-03-24T11:53:47Z | 24 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning",
"as",
"robust-speech-event",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
language: as
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning
- as
- robust-speech-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: XLS-R-300M - Assamese
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: as
metrics:
- name: Test WER
type: wer
value: 72.64
- name: Test CER
type: cer
value: 27.35
---
# wav2vec2-large-xls-r-300m-assamese
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_7_0 dataset.
It achieves the following results on the evaluation set:
- WER: 0.7954545454545454
- CER: 0.32341269841269843
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
To compute the evaluation metrics, run:
```bash
cd wav2vec2-large-xls-r-300m-assamese; python eval.py --model_id ./ --dataset mozilla-foundation/common_voice_7_0 --config as --split test --log_outputs
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-4
- train_batch_size: 16
- eval_batch_size: 8
- seed: not given
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 400
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.584065 | NA | 400 | 1.584065 | 0.915512 |
| 1.658865 | NA | 800 | 1.658865 | 0.805096 |
| 1.882352 | NA | 1200 | 1.882352 | 0.820742 |
| 1.881240 | NA | 1600 | 1.881240 | 0.810907 |
| 2.159748 | NA | 2000 | 2.159748 | 0.804202 |
| 1.992871 | NA | 2400 | 1.992871 | 0.803308 |
| 2.201436 | NA | 2800 | 2.201436 | 0.802861 |
| 2.165218 | NA | 3200 | 2.165218 | 0.793920 |
| 2.253643 | NA | 3600 | 2.253643 | 0.796603 |
| 2.265880 | NA | 4000 | 2.265880 | 0.790344 |
| 2.293935 | NA | 4400 | 2.293935 | 0.797050 |
| 2.288851 | NA | 4800 | 2.288851 | 0.784086 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.13.3
- Tokenizers 0.10.3
|
chmanoj/xls-r-300m-te
|
chmanoj
| 2022-03-24T11:53:34Z | 17 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"openslr_SLR66",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"te",
"dataset:openslr",
"dataset:SLR66",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- te
license: apache-2.0
tags:
- automatic-speech-recognition
- openslr_SLR66
- generated_from_trainer
- robust-speech-event
- hf-asr-leaderboard
datasets:
- openslr
- SLR66
metrics:
- wer
model-index:
- name: xls-r-300m-te
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
type: openslr
name: Open SLR
args: SLR66
metrics:
- type: wer
value: 24.695121951219512
name: Test WER
- type: cer
value: 4.861934182322532
name: Test CER
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the OPENSLR_SLR66 - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2680
- Wer: 0.3467
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 10.0
- mixed_precision_training: Native AMP
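The linear scheduler with warmup used here ramps the learning rate from 0 to the peak over the warmup steps, then decays it linearly to 0 at the final step. A minimal sketch mirroring the behavior of Transformers' `get_linear_schedule_with_warmup` (the 10000-step total matches the last logged step below; treat it as illustrative):

```python
def linear_lr_with_warmup(step, base_lr, warmup_steps, total_steps):
    """Learning rate at a given optimizer step for a linear
    schedule with warmup (0 -> base_lr -> 0)."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

base_lr = 7.5e-05   # learning_rate above
warmup = 2000       # lr_scheduler_warmup_steps above
total = 10000       # last optimizer step logged in the table below

print(linear_lr_with_warmup(1000, base_lr, warmup, total))   # halfway through warmup: 3.75e-05
print(linear_lr_with_warmup(10000, base_lr, warmup, total))  # end of training: 0.0
```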
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.0304 | 4.81 | 500 | 1.5676 | 1.0554 |
| 1.5263 | 9.61 | 1000 | 0.4693 | 0.8023 |
| 1.5299 | 14.42 | 1500 | 0.4368 | 0.7311 |
| 1.5063 | 19.23 | 2000 | 0.4360 | 0.7302 |
| 1.455 | 24.04 | 2500 | 0.4213 | 0.6692 |
| 1.4755 | 28.84 | 3000 | 0.4329 | 0.5943 |
| 1.352 | 33.65 | 3500 | 0.4074 | 0.5765 |
| 1.3122 | 38.46 | 4000 | 0.3866 | 0.5630 |
| 1.2799 | 43.27 | 4500 | 0.3860 | 0.5480 |
| 1.212 | 48.08 | 5000 | 0.3590 | 0.5317 |
| 1.1645 | 52.88 | 5500 | 0.3283 | 0.4757 |
| 1.0854 | 57.69 | 6000 | 0.3162 | 0.4687 |
| 1.0292 | 62.5 | 6500 | 0.3126 | 0.4416 |
| 0.9607 | 67.31 | 7000 | 0.2990 | 0.4066 |
| 0.9156 | 72.12 | 7500 | 0.2870 | 0.4009 |
| 0.8329 | 76.92 | 8000 | 0.2791 | 0.3909 |
| 0.7979 | 81.73 | 8500 | 0.2770 | 0.3670 |
| 0.7144 | 86.54 | 9000 | 0.2841 | 0.3661 |
| 0.6997 | 91.35 | 9500 | 0.2721 | 0.3485 |
| 0.6568 | 96.15 | 10000 | 0.2681 | 0.3437 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
RuudVelo/wav2vec2-large-xls-r-1b-nl
|
RuudVelo
| 2022-03-24T11:53:24Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"nl",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language:
- nl
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- nl
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-large-xls-r-1b-nl
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: nl
metrics:
- name: Test WER
type: wer
value: 11.12
- name: Test CER
type: cer
value: 3.2
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: nl
metrics:
- name: Test WER
type: wer
value: 31.92
- name: Test CER
type: cer
value: 13.87
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: nl
metrics:
- name: Test WER
type: wer
value: 32.17
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-1b-nl
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - NL dataset. A version of this model with a language model, which improves these results, is available at https://huggingface.co/RuudVelo/wav2vec2-large-xls-r-1b-nl-lm; that model achieves a Common Voice 8 Dutch test WER of 9.73.
It achieves the following results on the evaluation set:
- Loss: 0.1479
- Wer: 0.1156
## Model description
Model fine-tuned using the wav2vec2-xls-r-1b model architecture
## Intended uses & limitations
More information needed
## Training and evaluation data
The model has been trained on the Common Voice 8 Dutch dataset.
## Training procedure
### Training hyperparameters
The training hyperparameters can be found in the `run.sh` file under Files and versions.
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.2223 | 0.52 | 500 | 0.3866 | 0.3425 |
| 1.0748 | 1.03 | 1000 | 0.2574 | 0.2169 |
| 1.0416 | 1.55 | 1500 | 0.2177 | 0.1946 |
| 0.9951 | 2.06 | 2000 | 0.2008 | 0.1760 |
| 0.975 | 2.58 | 2500 | 0.1961 | 0.1751 |
| 0.9461 | 3.1 | 3000 | 0.1989 | 0.1782 |
| 0.9381 | 3.61 | 3500 | 0.1928 | 0.1699 |
| 0.934 | 4.13 | 4000 | 0.1923 | 0.1633 |
| 0.9322 | 4.64 | 4500 | 0.1871 | 0.1634 |
| 0.9012 | 5.16 | 5000 | 0.1890 | 0.1702 |
| 0.9045 | 5.68 | 5500 | 0.1882 | 0.1740 |
| 0.8826 | 6.19 | 6000 | 0.1856 | 0.1575 |
| 0.8848 | 6.71 | 6500 | 0.1861 | 0.1617 |
| 0.8723 | 7.22 | 7000 | 0.1927 | 0.1646 |
| 0.8725 | 7.74 | 7500 | 0.1798 | 0.1531 |
| 0.8573 | 8.26 | 8000 | 0.1781 | 0.1587 |
| 0.8633 | 8.77 | 8500 | 0.1852 | 0.1628 |
| 0.8603 | 9.29 | 9000 | 0.1833 | 0.1601 |
| 0.8421 | 9.8 | 9500 | 0.1788 | 0.1543 |
| 0.8404 | 10.32 | 10000 | 0.1844 | 0.1556 |
| 0.8342 | 10.84 | 10500 | 0.1770 | 0.1538 |
| 0.8161 | 11.35 | 11000 | 0.1821 | 0.1567 |
| 0.8371 | 11.87 | 11500 | 0.1909 | 0.1629 |
| 0.8083 | 12.38 | 12000 | 0.1778 | 0.1498 |
| 0.806 | 12.9 | 12500 | 0.1802 | 0.1547 |
| 0.8013 | 13.42 | 13000 | 0.1859 | 0.1584 |
| 0.7913 | 13.93 | 13500 | 0.1875 | 0.1517 |
| 0.8063 | 14.45 | 14000 | 0.1799 | 0.1571 |
| 0.7991 | 14.96 | 14500 | 0.1792 | 0.1538 |
| 0.7843 | 15.48 | 15000 | 0.1753 | 0.1464 |
| 0.7905 | 16.0 | 15500 | 0.1784 | 0.1508 |
| 0.7808 | 16.51 | 16000 | 0.1771 | 0.1485 |
| 0.7743 | 17.03 | 16500 | 0.1795 | 0.1491 |
| 0.7833 | 17.54 | 17000 | 0.1722 | 0.1484 |
| 0.7763 | 18.06 | 17500 | 0.1767 | 0.1518 |
| 0.7698 | 18.58 | 18000 | 0.1720 | 0.1460 |
| 0.7571 | 19.09 | 18500 | 0.1735 | 0.1478 |
| 0.7673 | 19.61 | 19000 | 0.1817 | 0.1511 |
| 0.7415 | 20.12 | 19500 | 0.1763 | 0.1481 |
| 0.751 | 20.64 | 20000 | 0.1742 | 0.1484 |
| 0.7563 | 21.16 | 20500 | 0.1810 | 0.1611 |
| 0.7423 | 21.67 | 21000 | 0.1817 | 0.1557 |
| 0.7242 | 22.19 | 21500 | 0.1690 | 0.1446 |
| 0.7251 | 22.7 | 22000 | 0.1684 | 0.1446 |
| 0.7302 | 23.22 | 22500 | 0.1735 | 0.1430 |
| 0.733 | 23.74 | 23000 | 0.1720 | 0.1454 |
| 0.7128 | 24.25 | 23500 | 0.1668 | 0.1383 |
| 0.7184 | 24.77 | 24000 | 0.1635 | 0.1377 |
| 0.7015 | 25.28 | 24500 | 0.1646 | 0.1389 |
| 0.7198 | 25.8 | 25000 | 0.1775 | 0.1462 |
| 0.7178 | 26.32 | 25500 | 0.1705 | 0.1419 |
| 0.7199 | 26.83 | 26000 | 0.1649 | 0.1416 |
| 0.6981 | 27.35 | 26500 | 0.1724 | 0.1418 |
| 0.6886 | 27.86 | 27000 | 0.1633 | 0.1382 |
| 0.6922 | 28.38 | 27500 | 0.1698 | 0.1420 |
| 0.6833 | 28.9 | 28000 | 0.1611 | 0.1351 |
| 0.6798 | 29.41 | 28500 | 0.1639 | 0.1365 |
| 0.6711 | 29.93 | 29000 | 0.1668 | 0.1358 |
| 0.6762 | 30.44 | 29500 | 0.1682 | 0.1355 |
| 0.6594 | 30.96 | 30000 | 0.1629 | 0.1345 |
| 0.6664 | 31.48 | 30500 | 0.1625 | 0.1321 |
| 0.6838 | 31.99 | 31000 | 0.1597 | 0.1372 |
| 0.6603 | 32.51 | 31500 | 0.1583 | 0.1302 |
| 0.6468 | 33.02 | 32000 | 0.1595 | 0.1322 |
| 0.6464 | 33.54 | 32500 | 0.1609 | 0.1315 |
| 0.6623 | 34.06 | 33000 | 0.1622 | 0.1366 |
| 0.6414 | 34.57 | 33500 | 0.1587 | 0.1330 |
| 0.6242 | 35.09 | 34000 | 0.1614 | 0.1337 |
| 0.632 | 35.6 | 34500 | 0.1568 | 0.1272 |
| 0.6346 | 36.12 | 35000 | 0.1583 | 0.1274 |
| 0.6143 | 36.64 | 35500 | 0.1576 | 0.1264 |
| 0.6208 | 37.15 | 36000 | 0.1621 | 0.1263 |
| 0.6185 | 37.67 | 36500 | 0.1623 | 0.1270 |
| 0.6128 | 38.18 | 37000 | 0.1604 | 0.1268 |
| 0.6151 | 38.7 | 37500 | 0.1593 | 0.1246 |
| 0.6082 | 39.22 | 38000 | 0.1532 | 0.1238 |
| 0.6 | 39.73 | 38500 | 0.1524 | 0.1224 |
| 0.6032 | 40.25 | 39000 | 0.1521 | 0.1212 |
| 0.6016 | 40.76 | 39500 | 0.1551 | 0.1215 |
| 0.6009 | 41.28 | 40000 | 0.1523 | 0.1215 |
| 0.5875 | 41.8 | 40500 | 0.1541 | 0.1216 |
| 0.608 | 42.31 | 41000 | 0.1536 | 0.1209 |
| 0.5876 | 42.83 | 41500 | 0.1567 | 0.1211 |
| 0.5714 | 43.34 | 42000 | 0.1532 | 0.1217 |
| 0.5756 | 43.86 | 42500 | 0.1516 | 0.1196 |
| 0.5719 | 44.38 | 43000 | 0.1491 | 0.1191 |
| 0.5829 | 44.89 | 43500 | 0.1497 | 0.1193 |
| 0.5664 | 45.41 | 44000 | 0.1487 | 0.1173 |
| 0.5707 | 45.92 | 44500 | 0.1470 | 0.1164 |
| 0.5696 | 46.44 | 45000 | 0.1479 | 0.1161 |
| 0.5767 | 46.96 | 45500 | 0.1492 | 0.1175 |
| 0.5573 | 47.47 | 46000 | 0.1471 | 0.1165 |
| 0.5625 | 47.99 | 46500 | 0.1484 | 0.1168 |
| 0.5671 | 48.5 | 47000 | 0.1474 | 0.1162 |
| 0.5484 | 49.02 | 47500 | 0.1479 | 0.1158 |
| 0.555 | 49.54 | 48000 | 0.1477 | 0.1157 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Iskaj/xlsr300m_cv_8.0_nl
|
Iskaj
| 2022-03-24T11:53:05Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"mozilla-foundation/common_voice_7_0",
"nl",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language:
- nl
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- mozilla-foundation/common_voice_7_0
- nl
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: XLS-R-300M - Dutch
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8 NL
type: mozilla-foundation/common_voice_8_0
args: nl
metrics:
- name: Test WER
type: wer
value: 46.94
- name: Test CER
type: cer
value: 21.65
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: nl
metrics:
- name: Test WER
type: wer
value: ???
- name: Test CER
type: cer
value: ???
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: nl
metrics:
- name: Test WER
type: wer
value: 42.56
---
# xlsr300m_cv_8.0_nl
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id Iskaj/xlsr300m_cv_8.0_nl --dataset mozilla-foundation/common_voice_8_0 --config nl --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id Iskaj/xlsr300m_cv_8.0_nl --dataset speech-recognition-community-v2/dev_data --config nl --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
### Inference
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "Iskaj/xlsr300m_cv_8.0_nl"
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "nl", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
inputs = processor(resampled_audio, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
transcription[0].lower()
#'het kontine schip lag aangemeert in de aven'
```
|
DrishtiSharma/wav2vec2-large-xls-r-300m-pa-IN-dx1
|
DrishtiSharma
| 2022-03-24T11:52:59Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"pa-IN",
"robust-speech-event",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language:
- pa-IN
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- pa-IN
- robust-speech-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-large-xls-r-300m-pa-IN-dx1
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: pa-IN
metrics:
- name: Test WER
type: wer
value: 0.48725989807918463
- name: Test CER
type: cer
value: 0.1687305197540224
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: pa-IN
metrics:
- name: Test WER
type: wer
value: NA
- name: Test CER
type: cer
value: NA
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-pa-IN-dx1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - PA-IN dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0855
- Wer: 0.4755
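The WER figures reported here are the word-level edit distance divided by the number of reference words. A minimal stdlib sketch of the computation behind the metric (not the project's `eval.py`, which uses the standard `wer` metric):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = word-level Levenshtein distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("the cat sat", "the cat sat"))  # 0.0
print(word_error_rate("the cat sat", "a cat"))        # one substitution + one deletion over 3 words
```

CER is computed the same way over characters instead of words.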
### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-pa-IN-dx1 --dataset mozilla-foundation/common_voice_8_0 --config pa-IN --split test --log_outputs
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
Punjabi isn't available in speech-recognition-community-v2/dev_data.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1200
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4607 | 9.26 | 500 | 2.7746 | 1.0416 |
| 0.3442 | 18.52 | 1000 | 0.9114 | 0.5911 |
| 0.2213 | 27.78 | 1500 | 0.9687 | 0.5751 |
| 0.1242 | 37.04 | 2000 | 1.0204 | 0.5461 |
| 0.0998 | 46.3 | 2500 | 1.0250 | 0.5233 |
| 0.0727 | 55.56 | 3000 | 1.1072 | 0.5382 |
| 0.0605 | 64.81 | 3500 | 1.0588 | 0.5073 |
| 0.0458 | 74.07 | 4000 | 1.0818 | 0.5069 |
| 0.0338 | 83.33 | 4500 | 1.0948 | 0.5108 |
| 0.0223 | 92.59 | 5000 | 1.0986 | 0.4775 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
infinitejoy/wav2vec2-large-xls-r-300m-tatar
|
infinitejoy
| 2022-03-24T11:52:33Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"tt",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- tt
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
- tt
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: XLS-R-300M - Tatar
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: tt
metrics:
- name: Test WER
type: wer
value: 24.392
- name: Test CER
type: cer
value: 5.024
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-tatar
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - TT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1959
- Wer: 0.2454
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 4000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.173 | 9.66 | 4000 | 0.2920 | 0.3608 |
| 0.9433 | 19.32 | 8000 | 0.2336 | 0.3026 |
| 0.8552 | 28.99 | 12000 | 0.2221 | 0.2799 |
| 0.7863 | 38.65 | 16000 | 0.1953 | 0.2479 |
| 0.7365 | 48.31 | 20000 | 0.1968 | 0.2449 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
infinitejoy/wav2vec2-large-xls-r-300m-kyrgyz
|
infinitejoy
| 2022-03-24T11:52:31Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"ky",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- ky
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
- ky
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: XLS-R-300M - Kyrgyz
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: ky
metrics:
- name: Test WER
type: wer
value: 40.908
- name: Test CER
type: cer
value: 10.999
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-kyrgyz
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - KY dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5817
- Wer: 0.4096
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.5412 | 18.69 | 2000 | 0.6161 | 0.5747 |
| 1.311 | 37.38 | 4000 | 0.5707 | 0.5070 |
| 1.1367 | 56.07 | 6000 | 0.5372 | 0.4664 |
| 0.9696 | 74.77 | 8000 | 0.5443 | 0.4328 |
| 0.8163 | 93.46 | 10000 | 0.5916 | 0.4124 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
glob-asr/wav2vec2-large-xls-r-300m-guarani-small
|
glob-asr
| 2022-03-24T11:52:10Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"robust-speech-event",
"gn",
"hf-asr-leaderboard",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- gn
license: apache-2.0
tags:
- generated_from_trainer
- robust-speech-event
- gn
- hf-asr-leaderboard
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-guarani-small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-guarani-small
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4964
- Wer: 0.5957
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 6.65 | 100 | 1.1326 | 1.0 |
| 1.6569 | 13.32 | 200 | 0.5264 | 0.6478 |
| 1.6569 | 19.97 | 300 | 0.5370 | 0.6261 |
| 0.2293 | 26.65 | 400 | 0.4964 | 0.5957 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
comodoro/wav2vec2-xls-r-300m-cs-cv8
|
comodoro
| 2022-03-24T11:52:03Z | 16 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"xlsr-fine-tuning-week",
"hf-asr-leaderboard",
"cs",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- cs
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- robust-speech-event
- xlsr-fine-tuning-week
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: Czech comodoro Wav2Vec2 XLSR 300M CV8
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: cs
metrics:
- name: Test WER
type: wer
value: 10.3
- name: Test CER
type: cer
value: 2.6
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: cs
metrics:
- name: Test WER
type: wer
value: 54.29
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: cs
metrics:
- name: Test WER
type: wer
value: 44.55
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-cs-cv8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice 8.0 dataset.
It achieves the following results on the evaluation set during training:
- Loss: 0.2327
- Wer: 0.1608
- Cer: 0.0376
Evaluated with the `eval.py` script using a language model, the results are:
- WER: 0.10281503199350225
- CER: 0.02622802241689026
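The WER and CER figures above are the standard word- and character-level error rates: the edit distance between reference and hypothesis, normalised by reference length. A minimal pure-Python sketch (illustrative only, not the `eval.py` implementation, which uses the `datasets` metrics):

```python
def edit_distance(ref, hyp):
    # Classic dynamic-programming Levenshtein distance over token sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)]

def wer(reference, hypothesis):
    # Word error rate: edit distance over words, normalised by reference word count.
    ref, hyp = reference.split(), hypothesis.split()
    return edit_distance(ref, hyp) / len(ref)

def cer(reference, hypothesis):
    # Character error rate: the same computation over characters.
    return edit_distance(list(reference), list(hypothesis)) / len(reference)
```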
## Model description
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Czech using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("mozilla-foundation/common_voice_8_0", "cs", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("comodoro/wav2vec2-xls-r-300m-cs-cv8")
model = Wav2Vec2ForCTC.from_pretrained("comodoro/wav2vec2-xls-r-300m-cs-cv8")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated using the attached `eval.py` script:
```
python eval.py --model_id comodoro/wav2vec2-xls-r-300m-cs-cv8 --dataset mozilla-foundation/common_voice_8_0 --split test --config cs
```
## Training and evaluation data
The Common Voice 8.0 `train` and `validation` datasets were used for training.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during the first stage of training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 20
- total_train_batch_size: 640
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 150
- mixed_precision_training: Native AMP
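The reported `total_train_batch_size` follows from the per-device batch size and gradient accumulation (the single-device count is an assumption; the card does not state the hardware). A quick check:

```python
train_batch_size = 32             # per-device batch size from the list above
gradient_accumulation_steps = 20  # gradients accumulated before each optimizer step
num_devices = 1                   # assumed; not stated in the card

# Effective batch size seen by each optimizer update.
total_train_batch_size = train_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)  # 640, matching the value reported above
```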
The following hyperparameters were used during the second stage of training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 20
- total_train_batch_size: 640
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|
| 7.2926 | 8.06 | 250 | 3.8497 | 1.0 | 1.0 |
| 3.417 | 16.13 | 500 | 3.2852 | 1.0 | 0.9857 |
| 2.0264 | 24.19 | 750 | 0.7099 | 0.7342 | 0.1768 |
| 0.4018 | 32.25 | 1000 | 0.6188 | 0.6415 | 0.1551 |
| 0.2444 | 40.32 | 1250 | 0.6632 | 0.6362 | 0.1600 |
| 0.1882 | 48.38 | 1500 | 0.6070 | 0.5783 | 0.1388 |
| 0.153 | 56.44 | 1750 | 0.6425 | 0.5720 | 0.1377 |
| 0.1214 | 64.51 | 2000 | 0.6363 | 0.5546 | 0.1337 |
| 0.1011 | 72.57 | 2250 | 0.6310 | 0.5222 | 0.1224 |
| 0.0879 | 80.63 | 2500 | 0.6353 | 0.5258 | 0.1253 |
| 0.0782 | 88.7 | 2750 | 0.6078 | 0.4904 | 0.1127 |
| 0.0709 | 96.76 | 3000 | 0.6465 | 0.4960 | 0.1154 |
| 0.0661 | 104.82 | 3250 | 0.6622 | 0.4945 | 0.1166 |
| 0.0616 | 112.89 | 3500 | 0.6440 | 0.4786 | 0.1104 |
| 0.0579 | 120.95 | 3750 | 0.6815 | 0.4887 | 0.1144 |
| 0.0549 | 129.03 | 4000 | 0.6603 | 0.4780 | 0.1105 |
| 0.0527 | 137.09 | 4250 | 0.6652 | 0.4749 | 0.1090 |
| 0.0506 | 145.16 | 4500 | 0.6958 | 0.4846 | 0.1133 |
Further fine-tuning with a slightly different architecture and a higher learning rate:
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 0.576 | 8.06 | 250 | 0.2411 | 0.2340 | 0.0502 |
| 0.2564 | 16.13 | 500 | 0.2305 | 0.2097 | 0.0492 |
| 0.2018 | 24.19 | 750 | 0.2371 | 0.2059 | 0.0494 |
| 0.1549 | 32.25 | 1000 | 0.2298 | 0.1844 | 0.0435 |
| 0.1224 | 40.32 | 1250 | 0.2288 | 0.1725 | 0.0407 |
| 0.1004 | 48.38 | 1500 | 0.2327 | 0.1608 | 0.0376 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
arampacha/wav2vec2-xls-r-1b-ka
|
arampacha
| 2022-03-24T11:51:59Z | 4 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"ka",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- ka
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- robust-speech-event
- hf-asr-leaderboard
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-1b-ka
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
type: mozilla-foundation/common_voice_8_0
name: Common Voice ka
args: ka
metrics:
- type: wer
value: 7.39778066580026
name: WER LM
- type: cer
value: 1.1882089427096434
name: CER LM
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: ka
metrics:
- name: Test WER
type: wer
value: 22.61
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: ka
metrics:
- name: Test WER
type: wer
value: 21.58
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-1b-ka
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the /WORKSPACE/DATA/KA/NOIZY_STUDENT_2/ - KA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1022
- Wer: 0.1527
- Cer: 0.0221
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 1.2839 | 6.45 | 400 | 0.2229 | 0.3609 | 0.0557 |
| 0.9775 | 12.9 | 800 | 0.1271 | 0.2202 | 0.0317 |
| 0.9045 | 19.35 | 1200 | 0.1268 | 0.2030 | 0.0294 |
| 0.8652 | 25.8 | 1600 | 0.1211 | 0.1940 | 0.0287 |
| 0.8505 | 32.26 | 2000 | 0.1192 | 0.1912 | 0.0276 |
| 0.8168 | 38.7 | 2400 | 0.1086 | 0.1763 | 0.0260 |
| 0.7737 | 45.16 | 2800 | 0.1098 | 0.1753 | 0.0256 |
| 0.744 | 51.61 | 3200 | 0.1054 | 0.1646 | 0.0239 |
| 0.7114 | 58.06 | 3600 | 0.1034 | 0.1573 | 0.0228 |
| 0.6773 | 64.51 | 4000 | 0.1022 | 0.1527 | 0.0221 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2
- Datasets 1.18.4.dev0
- Tokenizers 0.11.0
|
lsb/wav2vec2-base-it-latin
|
lsb
| 2022-03-24T11:51:21Z | 15 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"robust-speech-event",
"hf-asr-leaderboard",
"la",
"dataset:lsb/poetaexmachina-mp3-recitations",
"license:agpl-3.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- la
license: agpl-3.0
tags:
- robust-speech-event
- hf-asr-leaderboard
datasets:
- lsb/poetaexmachina-mp3-recitations
metrics:
- wer
model-index:
- name: wav2vec2-base-it-latin
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
type: lsb/poetaexmachina-mp3-recitations
name: Poeta Ex Machina mp3 recitations
metrics:
- type: wer
value: 0.398
name: Test WER
---
# wav2vec2-base-it-latin
This model is a fine-tuned version of [wav2vec2-base-it-voxpopuli](https://huggingface.co/facebook/wav2vec2-base-it-voxpopuli).
The dataset used is [poetaexmachina-mp3-recitations](https://github.com/lsb/poetaexmachina-mp3-recitations):
all of the 2-series texts (Vergil) and every tenth 1-series text (words from Poeta Ex Machina's [database](https://github.com/lsb/poetaexmachina/blob/master/merged-scansions.db) of words with scansions).
It achieves the following [results](https://github.com/lsb/tironiculum/blame/trunk/wav2vec2%20base%20it%20latin.ipynb#L1234) on the evaluation set:
- Loss: 0.1943
- WER: 0.398
|
infinitejoy/wav2vec2-large-xls-r-300m-bulgarian
|
infinitejoy
| 2022-03-24T11:47:30Z | 445 | 2 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"bg",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- bg
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
- bg
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: XLS-R-300M - Bulgarian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: bg
metrics:
- name: Test WER
type: wer
value: 46.68
- name: Test CER
type: cer
value: 10.75
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: bg
metrics:
- name: Test WER
type: wer
value: 63.68
- name: Test CER
type: cer
value: 19.88
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: bg
metrics:
- name: Test WER
type: wer
value: 64.08
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-bulgarian
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - BG dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4487
- Wer: 0.4674
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.9774 | 6.33 | 500 | 2.9769 | 1.0 |
| 1.3453 | 12.66 | 1000 | 0.6523 | 0.6980 |
| 1.1658 | 18.99 | 1500 | 0.5636 | 0.6359 |
| 1.0797 | 25.32 | 2000 | 0.5004 | 0.5759 |
| 1.044 | 31.65 | 2500 | 0.4958 | 0.5569 |
| 0.9915 | 37.97 | 3000 | 0.4971 | 0.5350 |
| 0.9429 | 44.3 | 3500 | 0.4829 | 0.5229 |
| 0.9266 | 50.63 | 4000 | 0.4515 | 0.5074 |
| 0.8965 | 56.96 | 4500 | 0.4599 | 0.5039 |
| 0.878 | 63.29 | 5000 | 0.4735 | 0.4954 |
| 0.8494 | 69.62 | 5500 | 0.4460 | 0.4878 |
| 0.8343 | 75.95 | 6000 | 0.4510 | 0.4795 |
| 0.8236 | 82.28 | 6500 | 0.4538 | 0.4789 |
| 0.8069 | 88.61 | 7000 | 0.4526 | 0.4748 |
| 0.7958 | 94.94 | 7500 | 0.4496 | 0.4700 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
buvnswrn/daml-t5-pretrain-imdb-accelerate
|
buvnswrn
| 2022-03-24T11:22:52Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-24T11:06:02Z |
---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- imdb
model-index:
- name: daml-t5-pretrain-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# daml-t5-pretrain-imdb
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the imdb dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
joe5campbell/Horovod_Tweet_Sentiment_1k_5eps
|
joe5campbell
| 2022-03-24T11:01:59Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-24T11:01:49Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Horovod_Tweet_Sentiment_1k_5eps
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Horovod_Tweet_Sentiment_1k_5eps
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5216092
- Train Accuracy: 0.784375
- Validation Loss: 0.92405033
- Validation Accuracy: 0.4875
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning_rate': 0.0003, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.7129049 | 0.50937504 | 0.7314203 | 0.490625 | 0 |
| 0.73165804 | 0.47343752 | 0.6929074 | 0.484375 | 1 |
| 0.6827939 | 0.55 | 0.6864271 | 0.50625 | 2 |
| 0.66076773 | 0.5578125 | 0.60817575 | 0.69687504 | 3 |
| 0.5216092 | 0.784375 | 0.92405033 | 0.4875 | 4 |
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.6.0
- Tokenizers 0.11.6
|
huggingtweets/vi0linheart
|
huggingtweets
| 2022-03-24T10:11:28Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-24T10:09:41Z |
---
language: en
thumbnail: http://www.huggingtweets.com/vi0linheart/1648116634962/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1500859213622300673/izXwf0KK_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">sal</div>
<div style="text-align: center; font-size: 14px;">@vi0linheart</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from sal.
| Data | sal |
| --- | --- |
| Tweets downloaded | 3114 |
| Retweets | 421 |
| Short tweets | 541 |
| Tweets kept | 2152 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/21y9qo98/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @vi0linheart's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3t019c6m) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3t019c6m/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/vi0linheart')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/kytalli-vi0linheart
|
huggingtweets
| 2022-03-24T09:38:01Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-24T09:25:29Z |
---
language: en
thumbnail: http://www.huggingtweets.com/kytalli-vi0linheart/1648114676311/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1500859213622300673/izXwf0KK_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1376749372831002627/2B9FZTnI_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">sal & G</div>
<div style="text-align: center; font-size: 14px;">@kytalli-vi0linheart</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from sal & G.
| Data | sal | G |
| --- | --- | --- |
| Tweets downloaded | 3114 | 3249 |
| Retweets | 421 | 55 |
| Short tweets | 541 | 226 |
| Tweets kept | 2152 | 2968 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1tj76wad/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @kytalli-vi0linheart's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1a1bludi) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1a1bludi/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/kytalli-vi0linheart')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
niksmer/RoBERTa-RILE
|
niksmer
| 2022-03-24T09:19:40Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: mit
metrics:
- accuracy
- precision
- recall
model-index:
- name: RoBERTa-RILE
results: []
widget:
- text: "Russia must end the war."
- text: "Democratic institutions must be supported."
- text: "The state must fight political corruption."
- text: "Our energy economy must be nationalised."
- text: "We must increase social spending."
---
# RoBERTa-RILE
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on data from the [Manifesto Project](https://manifesto-project.wzb.eu/).
## Model description
This model was trained on 115,943 manually annotated sentences to classify text into one of three political categories: "neutral", "left", "right".
## Intended uses & limitations
The model output reproduces the limitations of the dataset in terms of country coverage, time span, domain definitions and potential biases of the annotators - as any supervised machine learning model would. Applying the model to other types of data (other types of texts, countries etc.) will reduce performance.
```python
from transformers import pipeline
import pandas as pd
classifier = pipeline(
task="text-classification",
model="niksmer/RoBERTa-RILE")
# Load text data you want to classify
text = pd.read_csv("example.csv")["text_you_want_to_classify"].to_list()
# Inference
output = classifier(text)
# Print output
pd.DataFrame(output).head()
```
## Training and evaluation data
RoBERTa-RILE was trained on the English-speaking subset of the [Manifesto Project Dataset (MPDS2021a)](https://manifesto-project.wzb.eu/datasets). The model was trained on 115,943 sentences from 163 political manifestos in 7 English-speaking countries (Australia, Canada, Ireland, New Zealand, South Africa, United Kingdom, United States). The manifestos were published between 1992 - 2020.
| Country | Count manifestos | Count sentences | Time span |
|----------------|------------------|-----------------|--------------------|
| Australia | 18 | 14,887 | 2010-2016 |
| Ireland | 23 | 24,966 | 2007-2016 |
| Canada | 14 | 12,344 | 2004-2008 & 2015 |
| New Zealand | 46 | 35,079 | 1993-2017 |
| South Africa | 29 | 13,334 | 1994-2019 |
| USA | 9 | 13,188 | 1992 & 2004-2020 |
| United Kingdom | 34 | 30,936 | 1997-2019 |
Canadian manifestos between 2004 and 2008 are used as test data.
The Manifesto Project manually annotates individual sentences from political party manifestos in over 50 main categories - see the [codebook](https://manifesto-project.wzb.eu/down/papers/handbook_2021_version_5.pdf) for the exact definition of each category. It has created a validated left-right scale, the RILE index, to aggregate manifestos in a standardized, one-dimensional political space from left to right based on saliency theory.
RoBERTa-RILE classifies texts based on the RILE index.
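As an illustration, a RILE-style score for a manifesto can be computed from per-sentence labels as the share of "right" sentences minus the share of "left" sentences. This is a simplified sketch; the official Manifesto Project index is defined over specific groups of the 50+ main categories, not the three-way labels of this model:

```python
def rile_score(labels):
    # labels: per-sentence predictions, 0 = neutral, 1 = left, 2 = right
    n = len(labels)
    left_share = labels.count(1) / n
    right_share = labels.count(2) / n
    # Ranges from -100 (all sentences left) to +100 (all sentences right).
    return 100 * (right_share - left_share)

print(rile_score([0, 1, 1, 2, 0]))  # -20.0: 20% right minus 40% left
```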
### Train data
Train data was slightly imbalanced.
| Label | Description | Count |
|------------|--------------|--------|
| 0 | neutral | 52,277 |
| 1 | left | 37,106 |
| 2 | right | 26,560 |
Overall count: 115,943
### Validation data
The validation set was sampled at random.
| Label | Description | Count |
|------------|--------------|--------|
| 0 | neutral | 9,198 |
| 1 | left | 6,637 |
| 2 | right | 4,626 |
Overall count: 20,461
### Test data
The test dataset contains ten Canadian manifestos published between 2004 and 2008.
| Label | Description | Count |
|------------|--------------|--------|
| 0 | neutral | 3,881 |
| 1 | left | 2,611 |
| 2 | right | 1,838 |
Overall count: 8,330
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
```
training_args = TrainingArguments(
warmup_ratio=0.05,
weight_decay=0.1,
learning_rate=1e-05,
fp16 = True,
evaluation_strategy="epoch",
num_train_epochs=5,
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
save_strategy="no",
logging_dir='logs',
logging_strategy= 'steps',
logging_steps=10,
push_to_hub=True,
hub_strategy="end")
```
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1-micro | F1-macro | F1-weighted | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:--------:|:-----------:|:---------:|:------:|
| 0.7442 | 1.0 | 1812 | 0.6827 | 0.7120 | 0.7120 | 0.7007 | 0.7126 | 0.7120 | 0.7120 |
| 0.6447 | 2.0 | 3624 | 0.6618 | 0.7281 | 0.7281 | 0.7169 | 0.7281 | 0.7281 | 0.7281 |
| 0.5467 | 3.0 | 5436 | 0.6657 | 0.7309 | 0.7309 | 0.7176 | 0.7295 | 0.7309 | 0.7309 |
| 0.5179 | 4.0 | 7248 | 0.6654 | 0.7346 | 0.7346 | 0.7240 | 0.7345 | 0.7346 | 0.7346 |
| 0.4787 | 5.0 | 9060 | 0.6757 | 0.7350 | 0.7350 | 0.7241 | 0.7347 | 0.7350 | 0.7350 |
### Validation evaluation
| Model | Micro F1-Score | Macro F1-Score | Weighted F1-Score |
|----------------|----------------|----------------|-------------------|
| RoBERTa-RILE | 0.74 | 0.72 | 0.73 |
### Test evaluation
| Model | Micro F1-Score | Macro F1-Score | Weighted F1-Score |
|----------------|----------------|----------------|-------------------|
| RoBERTa-RILE | 0.69 | 0.67 | 0.69 |
### Evaluation per category
| Label | Validation F1-Score | Test F1-Score |
|-----------------------------|---------------------|---------------|
| neutral | 0.77 | 0.74 |
| left | 0.73 | 0.65 |
| right | 0.67 | 0.62 |
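For reference, the macro score is the unweighted mean of the per-class F1 values, while the weighted score weights each class by its support. Plugging in the validation per-class F1 values above with the validation class counts from earlier in this card roughly reproduces the summary table (a sketch; small differences come from rounding the per-class values to two decimals):

```python
def macro_f1(per_class_f1):
    # Unweighted mean over classes: every class counts equally.
    return sum(per_class_f1) / len(per_class_f1)

def weighted_f1(per_class_f1, supports):
    # Mean weighted by class support: frequent classes count more.
    total = sum(supports)
    return sum(f * s for f, s in zip(per_class_f1, supports)) / total

val_f1 = [0.77, 0.73, 0.67]          # neutral, left, right (validation column above)
val_support = [9198, 6637, 4626]     # validation class counts from this card

print(round(macro_f1(val_f1), 2))                  # 0.72, as in the summary table
print(round(weighted_f1(val_f1, val_support), 2))  # 0.73, as in the summary table
```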
### Evaluation based on saliency theory
Saliency theory is an approach to analysing political text data. In short, parties tend to write about policies for which they expect to be seen as competent.
Voters tend to assign advantages in policy competence in line with the assumed ideology of parties. Therefore, the share of policies parties write about in their manifestos can be analyzed to infer party ideology.
The Manifesto Project introduced the RILE index for such analyses. For a quick overview, check [this](https://manifesto-project.wzb.eu/down/tutorials/main-dataset.html#measuring-parties-left-right-positions).
In the following plot, the predicted and original RILE indices are shown per manifesto in the test dataset. Overall, the Pearson correlation between the predicted and original RILE indices is 0.95. As an alternative, you can use [ManiBERT](https://huggingface.co/niksmer/ManiBERT).

### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.0+cu102
- Datasets 1.8.0
- Tokenizers 0.10.3
|
niksmer/ManiBERT
|
niksmer
| 2022-03-24T09:03:13Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: mit
metrics:
- accuracy
- precision
- recall
model-index:
- name: ManiBERT
results: []
widget:
- text: "Russia must end the war."
- text: "Democratic institutions must be supported."
- text: "The state must fight political corruption."
- text: "Our energy economy must be nationalised."
- text: "We must increase social spending."
---
# ManiBERT
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on data from the [Manifesto Project](https://manifesto-project.wzb.eu/).
## Model description
This model was trained on 115,943 manually annotated sentences to classify text into one of 56 political categories.
## Intended uses & limitations
The model output reproduces the limitations of the dataset in terms of country coverage, time span, domain definitions and potential biases of the annotators - as any supervised machine learning model would. Applying the model to other types of data (other types of texts, countries etc.) will reduce performance.
```python
from transformers import pipeline
import pandas as pd
classifier = pipeline(
task="text-classification",
model="niksmer/ManiBERT")
# Load text data you want to classify
text = pd.read_csv("example.csv")["text_you_want_to_classify"].to_list()
# Inference
output = classifier(text)
# Print output
pd.DataFrame(output).head()
```
## Train Data
ManiBERT was trained on the English-speaking subset of the [Manifesto Project Dataset (MPDS2021a)](https://manifesto-project.wzb.eu/datasets). The model was trained on 115,943 sentences from 163 political manifestos in 7 English-speaking countries (Australia, Canada, Ireland, New Zealand, South Africa, United Kingdom, United States). The manifestos were published between 1992 - 2020.
| Country | Count manifestos | Count sentences | Time span |
|----------------|------------------|-----------------|--------------------|
| Australia | 18 | 14,887 | 2010-2016 |
| Ireland | 23 | 24,966 | 2007-2016 |
| Canada | 14 | 12,344 | 2004-2008 & 2015 |
| New Zealand | 46 | 35,079 | 1993-2017 |
| South Africa | 29 | 13,334 | 1994-2019 |
| USA | 9 | 13,188 | 1992 & 2004-2020 |
| United Kingdom | 34 | 30,936 | 1997-2019 |
Canadian manifestos between 2004 and 2008 are used as test data.
The resulting datasets are highly imbalanced; see the evaluation below.
## Evaluation
| Description | Label | Count Train Data | Count Validation Data | Count Test Data | Validation F1-Score | Test F1-Score |
|-------------------------------------------------------------------|-------|------------------|-----------------------|-----------------|---------------------|---------------|
| Foreign Special Relationships: Positive | 0 | 545 | 96 | 60 | 0.43 | 0.45 |
| Foreign Special Relationships: Negative | 1 | 66 | 14 | 22 | 0.22 | 0.09 |
| Anti-Imperialism | 2 | 93 | 16 | 1 | 0.16 | 0.00 |
| Military: Positive | 3 | 1,969 | 356 | 159 | 0.69 | 0.63 |
| Military: Negative | 4 | 489 | 89 | 52 | 0.59 | 0.63 |
| Peace | 5 | 418 | 80 | 49 | 0.57 | 0.64 |
| Internationalism: Positive | 6 | 2,401 | 417 | 404 | 0.60 | 0.54 |
| European Community/Union or Latin America Integration: Positive | 7 | 930 | 156 | 20 | 0.58 | 0.32 |
| Internationalism: Negative | 8 | 209 | 40 | 57 | 0.28 | 0.05 |
| European Community/Union or Latin America Integration: Negative | 9 | 520 | 81 | 0 | 0.39 | - |
| Freedom and Human Rights | 10 | 2,196 | 389 | 76 | 0.50 | 0.34 |
| Democracy | 11 | 3,045 | 534 | 206 | 0.53 | 0.51 |
| Constitutionalism: Positive | 12 | 259 | 48 | 12 | 0.34 | 0.22 |
| Constitutionalism: Negative | 13 | 380 | 72 | 2 | 0.34 | 0.00 |
| Decentralisation: Positive | 14 | 2,791 | 481 | 331 | 0.49 | 0.45 |
| Centralisation: Positive | 15 | 150 | 33 | 71 | 0.11 | 0.00 |
| Governmental and Administrative Efficiency | 16 | 3,905 | 711 | 105 | 0.50 | 0.32 |
| Political Corruption | 17 | 900 | 186 | 234 | 0.59 | 0.55 |
| Political Authority | 18 | 3,488 | 627 | 300 | 0.51 | 0.39 |
| Free Market Economy | 19 | 1,768 | 309 | 53 | 0.40 | 0.16 |
| Incentives: Positive | 20 | 3,100 | 544 | 81 | 0.52 | 0.28 |
| Market Regulation | 21 | 3,562 | 616 | 210 | 0.50 | 0.36 |
| Economic Planning | 22 | 533 | 93 | 67 | 0.31 | 0.12 |
| Corporatism/ Mixed Economy | 23 | 193 | 32 | 23 | 0.28 | 0.33 |
| Protectionism: Positive | 24 | 633 | 103 | 180 | 0.44 | 0.22 |
| Protectionism: Negative | 25 | 723 | 118 | 149 | 0.52 | 0.40 |
| Economic Goals | 26 | 817 | 139 | 148 | 0.05 | 0.00 |
| Keynesian Demand Management | 27 | 160 | 25 | 9 | 0.00 | 0.00 |
| Economic Growth: Positive | 28 | 3,142 | 607 | 374 | 0.53 | 0.30 |
| Technology and Infrastructure: Positive | 29 | 8,643 | 1,529 | 339 | 0.71 | 0.56 |
| Controlled Economy | 30 | 567 | 96 | 94 | 0.47 | 0.16 |
| Nationalisation | 31 | 832 | 157 | 27 | 0.56 | 0.16 |
| Economic Orthodoxy | 32 | 1,721 | 287 | 184 | 0.55 | 0.48 |
| Marxist Analysis: Positive | 33 | 148 | 33 | 0 | 0.20 | - |
| Anti-Growth Economy and Sustainability | 34 | 2,676 | 452 | 250 | 0.43 | 0.33 |
| Environmental Protection | 35 | 6,731 | 1,163 | 934 | 0.70 | 0.67 |
| Culture: Positive | 36 | 2,082 | 358 | 92 | 0.69 | 0.56 |
| Equality: Positive | 37 | 6,630 | 1,126 | 361 | 0.57 | 0.43 |
| Welfare State Expansion | 38 | 13,486 | 2,405 | 990 | 0.72 | 0.61 |
| Welfare State Limitation | 39 | 926 | 151 | 2 | 0.45 | 0.00 |
| Education Expansion | 40 | 7,191 | 1,324 | 274 | 0.78 | 0.63 |
| Education Limitation | 41 | 154 | 27 | 1 | 0.17 | 0.00 |
| National Way of Life: Positive | 42 | 2,105 | 385 | 395 | 0.48 | 0.34 |
| National Way of Life: Negative | 43 | 743 | 147 | 2 | 0.27 | 0.00 |
| Traditional Morality: Positive | 44 | 1,375 | 234 | 19 | 0.55 | 0.14 |
| Traditional Morality: Negative | 45 | 291 | 54 | 38 | 0.30 | 0.23 |
| Law and Order | 46 | 5,582 | 949 | 381 | 0.72 | 0.71 |
| Civic Mindedness: Positive | 47 | 1,348 | 229 | 27 | 0.45 | 0.28 |
| Multiculturalism: Positive | 48 | 2,006 | 355 | 71 | 0.61 | 0.35 |
| Multiculturalism: Negative | 49 | 144 | 31 | 7 | 0.33 | 0.00 |
| Labour Groups: Positive | 50 | 3,856 | 707 | 57 | 0.64 | 0.14 |
| Labour Groups: Negative | 51 | 208 | 35 | 0 | 0.44 | - |
| Agriculture and Farmers | 52 | 2,996 | 490 | 130 | 0.67 | 0.56 |
| Middle Class and Professional Groups | 53 | 271 | 38 | 12 | 0.38 | 0.40 |
| Underprivileged Minority Groups | 54 | 1,417 | 252 | 82 | 0.34 | 0.33 |
| Non-economic Demographic Groups | 55 | 2,429 | 435 | 106 | 0.42 | 0.24 |
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
```
training_args = TrainingArguments(
warmup_ratio=0.05,
weight_decay=0.1,
learning_rate=5e-05,
fp16 = True,
evaluation_strategy="epoch",
num_train_epochs=5,
per_device_train_batch_size=16,
overwrite_output_dir=True,
per_device_eval_batch_size=16,
save_strategy="no",
logging_dir='logs',
logging_strategy= 'steps',
logging_steps=10,
push_to_hub=True,
hub_strategy="end")
```
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1-micro | F1-macro | F1-weighted | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:--------:|:-----------:|:---------:|:------:|
| 1.7638 | 1.0 | 1812 | 1.6471 | 0.5531 | 0.5531 | 0.3354 | 0.5368 | 0.5531 | 0.5531 |
| 1.4501 | 2.0 | 3624 | 1.5167 | 0.5807 | 0.5807 | 0.3921 | 0.5655 | 0.5807 | 0.5807 |
| 1.0638 | 3.0 | 5436 | 1.5017 | 0.5893 | 0.5893 | 0.4240 | 0.5789 | 0.5893 | 0.5893 |
| 0.9263 | 4.0 | 7248 | 1.5173 | 0.5975 | 0.5975 | 0.4499 | 0.5901 | 0.5975 | 0.5975 |
| 0.7859 | 5.0 | 9060 | 1.5574 | 0.5978 | 0.5978 | 0.4564 | 0.5903 | 0.5978 | 0.5978 |
### Overall evaluation
| Type | Micro F1-Score | Macro F1-Score | Weighted F1-Score |
|----------------|----------------|----------------|-------------------|
| Validation | 0.60 | 0.46 | 0.59 |
| Test | 0.48 | 0.30 | 0.47 |
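The gap between micro and macro F1 reflects the class imbalance noted above: micro F1 weights every sentence equally, while macro F1 weights every category equally, so rare categories with low scores pull the macro average down. A small self-contained illustration (with made-up labels, not model output):

```python
def f1_scores(y_true, y_pred, labels):
    """Compute micro and macro F1 for a single-label classification task."""
    per_class = []
    tp_total = fp_total = fn_total = 0
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        denom = 2 * tp + fp + fn
        per_class.append(2 * tp / denom if denom else 0.0)
        tp_total += tp; fp_total += fp; fn_total += fn
    macro = sum(per_class) / len(labels)
    micro = 2 * tp_total / (2 * tp_total + fp_total + fn_total)
    return micro, macro

# 9 of 10 examples belong to class 0; the model predicts class 0 everywhere.
micro, macro = f1_scores([0] * 9 + [1], [0] * 10, labels=[0, 1])
print(round(micro, 3), round(macro, 3))  # micro 0.9, macro only 0.474
```

The rare class drags the macro score down even though most predictions are correct, which is exactly the pattern in the table above.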
### Evaluation based on saliency theory
Saliency theory is an approach to analysing political texts. In short, parties tend to emphasise the policy areas in which they expect to be seen as competent, and voters tend to attribute policy competence in line with a party's assumed ideology. The share of each policy area in a manifesto can therefore be used to estimate party ideology.
For such an analysis, the Manifesto Project provides the RILE index. For a quick overview, check [this](https://manifesto-project.wzb.eu/down/tutorials/main-dataset.html#measuring-parties-left-right-positions).
In the following plot, the predicted and original RILE indices are shown for each manifesto in the test dataset. Overall, the Pearson correlation between the predicted and original RILE indices is 0.95. As an alternative, you can use [RoBERTa-RILE](https://huggingface.co/niksmer/RoBERTa-RILE).

### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.0+cu102
- Datasets 1.8.0
- Tokenizers 0.10.3
|
huggingtweets/tariqnasheed
|
huggingtweets
| 2022-03-24T08:54:50Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-24T08:47:22Z |
---
language: en
thumbnail: http://www.huggingtweets.com/tariqnasheed/1648112086220/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1506809010988539910/bBCRvJ4K_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Tariq Nasheed 🇺🇸</div>
<div style="text-align: center; font-size: 14px;">@tariqnasheed</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Tariq Nasheed 🇺🇸.
| Data | Tariq Nasheed 🇺🇸 |
| --- | --- |
| Tweets downloaded | 3235 |
| Retweets | 273 |
| Short tweets | 396 |
| Tweets kept | 2566 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/f1jq7tem/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @tariqnasheed's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2dn7iubq) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2dn7iubq/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/tariqnasheed')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
enimai/mt5-mustc-fr
|
enimai
| 2022-03-24T07:30:36Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-24T06:59:25Z |
---
license: apache-2.0
---
|
tiennvcs/distilbert-base-uncased-finetuned-ner
|
tiennvcs
| 2022-03-24T07:29:26Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-24T07:17:55Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9264836138175376
- name: Recall
type: recall
value: 0.9361226087929299
- name: F1
type: f1
value: 0.9312781703856213
- name: Accuracy
type: accuracy
value: 0.9836529143565221
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0616
- Precision: 0.9265
- Recall: 0.9361
- F1: 0.9313
- Accuracy: 0.9837
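The reported F1 is the harmonic mean of the reported precision and recall, which can be verified directly:

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

print(round(f1(0.9265, 0.9361), 4))  # -> 0.9313, matching the reported F1
```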
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2437 | 1.0 | 878 | 0.0745 | 0.9144 | 0.9173 | 0.9158 | 0.9799 |
| 0.0518 | 2.0 | 1756 | 0.0621 | 0.9177 | 0.9353 | 0.9264 | 0.9826 |
| 0.03 | 3.0 | 2634 | 0.0616 | 0.9265 | 0.9361 | 0.9313 | 0.9837 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
|
nguyenvulebinh/iwslt-asr-wav2vec-large-4500h
|
nguyenvulebinh
| 2022-03-24T07:12:52Z | 4 | 2 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"en",
"dataset:common_voice",
"dataset:librispeech_asr",
"dataset:how2",
"dataset:must-c-v1",
"dataset:must-c-v2",
"dataset:europarl",
"dataset:tedlium",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-23T14:53:55Z |
---
language: en
datasets:
- common_voice
- librispeech_asr
- how2
- must-c-v1
- must-c-v2
- europarl
- tedlium
tags:
- audio
- automatic-speech-recognition
license: cc-by-nc-4.0
---
# Fine-Tune Wav2Vec2 large model for English ASR
### Data for fine-tune
| Dataset | Duration in hours |
|--------------|-------------------|
| Common Voice | 1667 |
| Europarl | 85 |
| How2 | 356 |
| Librispeech | 936 |
| MuST-C v1 | 407 |
| MuST-C v2 | 482 |
| Tedlium | 482 |
### Evaluation result
| Dataset | Duration in hours | WER w/o LM | WER with LM |
|-------------|-------------------|------------|-------------|
| Librispeech | 5.4 | 2.9 | 1.1 |
| Tedlium | 2.6 | 7.9 | 5.4 |
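WER in the tables above is the word-level edit distance between hypothesis and reference, divided by the reference length. A minimal reference implementation for checking results:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

print(round(wer("the cat sat", "the bat sat"), 3))  # one substitution -> 0.333
```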
### Usage
[](https://colab.research.google.com/drive/1FAhtGvjRdHT4W0KeMdMMlL7sm6Hbe7dv?usp=sharing)
```python
from transformers.file_utils import cached_path, hf_bucket_url
from importlib.machinery import SourceFileLoader
from transformers import Wav2Vec2ProcessorWithLM
from IPython.lib.display import Audio
import torchaudio
import torch
# Load model & processor
model_name = "nguyenvulebinh/iwslt-asr-wav2vec-large-4500h"
model = SourceFileLoader("model", cached_path(hf_bucket_url(model_name,filename="model_handling.py"))).load_module().Wav2Vec2ForCTC.from_pretrained(model_name)
processor = Wav2Vec2ProcessorWithLM.from_pretrained(model_name)
# Load an example audio (16k)
audio, sample_rate = torchaudio.load(cached_path(hf_bucket_url(model_name, filename="tst_2010_sample.wav")))
input_data = processor.feature_extractor(audio[0], sampling_rate=16000, return_tensors='pt')
# Infer
output = model(**input_data)
# Output transcript without LM
print(processor.tokenizer.decode(output.logits.argmax(dim=-1)[0].detach().cpu().numpy()))
# and of course there's teams that have a lot more tada structures and among the best are recent graduates of kindergarten
# Output transcript with LM
print(processor.decode(output.logits.cpu().detach().numpy()[0], beam_width=100).text)
# and of course there are teams that have a lot more ta da structures and among the best are recent graduates of kindergarten
```
### Model Parameters License
The ASR model parameters are made available for non-commercial use only, under the terms of the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license. You can find details at: https://creativecommons.org/licenses/by-nc/4.0/legalcode
### Contact
nguyenvulebinh@gmail.com
[](https://twitter.com/intent/follow?screen_name=nguyenvulebinh)
|
simonnedved/codet5-base
|
simonnedved
| 2022-03-24T06:57:59Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dis2py",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-23T22:11:24Z |
---
license: apache-2.0
tags:
- dis2py
- generated_from_trainer
model-index:
- name: codet5-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codet5-base
This model is a fine-tuned version of [Salesforce/codet5-base](https://huggingface.co/Salesforce/codet5-base) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Pavithra/codeparrot-ds-sample
|
Pavithra
| 2022-03-24T06:41:47Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-23T05:12:32Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: codeparrot-ds-sample
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds-sample
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.5219
- eval_runtime: 603.3856
- eval_samples_per_second: 154.402
- eval_steps_per_second: 4.826
- epoch: 0.15
- step: 10000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
quincyqiang/chinese-roberta-wwm-ext
|
quincyqiang
| 2022-03-24T04:58:07Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-24T04:52:35Z |
---
license: apache-2.0
---
|
Yaxin/xlm-roberta-base-yelp-mlm
|
Yaxin
| 2022-03-24T04:44:37Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"dataset:yelp_review_full",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-24T04:10:58Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- yelp_review_full
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-yelp-mlm
results:
- task:
name: Masked Language Modeling
type: fill-mask
dataset:
name: yelp_review_full
type: yelp_review_full
args: yelp_review_full
metrics:
- name: Accuracy
type: accuracy
value: 0.7356223359340127
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-yelp-mlm
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the yelp_review_full dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1743
- Accuracy: 0.7356
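For a masked language model, the evaluation loss is the mean cross-entropy per masked token, so it corresponds to a pseudo-perplexity of roughly exp(1.1743) ≈ 3.24 on the masked positions:

```python
import math

eval_loss = 1.1743  # mean cross-entropy per masked token, from above
perplexity = math.exp(eval_loss)
print(round(perplexity, 2))  # -> 3.24
```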
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.0
- Datasets 1.18.3
- Tokenizers 0.11.0
|
FuriouslyAsleep/unhappyZebra100
|
FuriouslyAsleep
| 2022-03-24T04:39:04Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain",
"en",
"dataset:FuriouslyAsleep/autotrain-data-techDataClassifeier",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-24T04:38:22Z |
---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- FuriouslyAsleep/autotrain-data-techDataClassifeier
co2_eq_emissions: 0.6969569001670619
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 664919631
- CO2 Emissions (in grams): 0.6969569001670619
## Validation Metrics
- Loss: 0.022509008646011353
- Accuracy: 1.0
- Precision: 1.0
- Recall: 1.0
- AUC: 1.0
- F1: 1.0
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/FuriouslyAsleep/autotrain-techDataClassifeier-664919631
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("FuriouslyAsleep/autotrain-techDataClassifeier-664919631", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("FuriouslyAsleep/autotrain-techDataClassifeier-664919631", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
lazyturtl/digital
|
lazyturtl
| 2022-03-24T04:28:50Z | 68 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-15T00:21:49Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: digital
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8974359035491943
---
# digital
## Example Images
#### ansys

#### blender

#### roblox

#### sketchup

|
clisi2000/distilbert-base-uncased-distilled-clinc
|
clisi2000
| 2022-03-24T03:50:04Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-24T03:43:46Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
model-index:
- name: distilbert-base-uncased-distilled-clinc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
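The card does not document the distillation objective, but distilled classifiers of this kind are commonly trained with a temperature-scaled KL divergence between teacher and student logits (Hinton et al., 2015). A minimal sketch of that loss term (an assumption about this model, not confirmed by the card):

```python
import math

def softmax(logits, temperature=1.0):
    """Numerically stable softmax over temperature-scaled logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_kl(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on softened distributions, scaled by T^2
    as in Hinton-style knowledge distillation."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return temperature ** 2 * kl

print(distillation_kl([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # identical logits -> 0.0
```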
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 9
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.2+cpu
- Datasets 1.18.4
- Tokenizers 0.10.3
|
rurupang/roberta-base-finetuned-sts
|
rurupang
| 2022-03-24T01:54:26Z | 25 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:klue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-22T14:13:32Z |
---
tags:
- generated_from_trainer
datasets:
- klue
metrics:
- pearsonr
model-index:
- name: roberta-base-finetuned-sts
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: klue
type: klue
args: sts
metrics:
- name: Pearsonr
type: pearsonr
value: 0.956039443806831
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-sts
This model is a fine-tuned version of [klue/roberta-base](https://huggingface.co/klue/roberta-base) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1999
- Pearsonr: 0.9560
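Pearsonr above measures the linear correlation between predicted and gold similarity scores. A minimal implementation for reference:

```python
import math

def pearsonr(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(round(pearsonr([1, 2, 3, 4], [2, 4, 6, 8]), 6))  # perfectly linear -> 1.0
```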
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearsonr |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 329 | 0.2462 | 0.9478 |
| 1.2505 | 2.0 | 658 | 0.1671 | 0.9530 |
| 1.2505 | 3.0 | 987 | 0.1890 | 0.9525 |
| 0.133 | 4.0 | 1316 | 0.2360 | 0.9548 |
| 0.0886 | 5.0 | 1645 | 0.2265 | 0.9528 |
| 0.0886 | 6.0 | 1974 | 0.2097 | 0.9518 |
| 0.0687 | 7.0 | 2303 | 0.2281 | 0.9523 |
| 0.0539 | 8.0 | 2632 | 0.2212 | 0.9542 |
| 0.0539 | 9.0 | 2961 | 0.1843 | 0.9532 |
| 0.045 | 10.0 | 3290 | 0.1999 | 0.9560 |
| 0.0378 | 11.0 | 3619 | 0.2357 | 0.9533 |
| 0.0378 | 12.0 | 3948 | 0.2134 | 0.9541 |
| 0.033 | 13.0 | 4277 | 0.2273 | 0.9540 |
| 0.03 | 14.0 | 4606 | 0.2148 | 0.9533 |
| 0.03 | 15.0 | 4935 | 0.2207 | 0.9534 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
negfir/distilbert-base-uncased-finetuned-squad
|
negfir
| 2022-03-24T01:39:12Z | 40 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2200
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.2789 | 1.0 | 5533 | 1.2200 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
huggingtweets/btohtoh
|
huggingtweets
| 2022-03-24T01:35:56Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-24T01:35:48Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1506402743296020484/X79Yfcx5_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">BToh</div>
<div style="text-align: center; font-size: 14px;">@btohtoh</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from BToh.
| Data | BToh |
| --- | --- |
| Tweets downloaded | 3241 |
| Retweets | 347 |
| Short tweets | 480 |
| Tweets kept | 2414 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1xnk5832/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @btohtoh's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2gdcu3k6) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2gdcu3k6/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/btohtoh')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
espnet/chai_microsoft_indian_langs_te
|
espnet
| 2022-03-24T00:36:45Z | 0 | 0 |
espnet
|
[
"espnet",
"audio",
"automatic-speech-recognition",
"te",
"dataset:microsoft_indian_languages_interspeech2018",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] |
automatic-speech-recognition
| 2022-03-23T23:36:26Z |
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: te
datasets:
- microsoft_indian_languages_interspeech2018
license: cc-by-4.0
---
## ESPnet2 model
### ``
This model was trained by Chaitanya Narisetty using the recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
pip install -e .
cd egs2/ms_indic_is18/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/chai_microsoft_indian_langs_te
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Tue Mar 22 13:38:24 EDT 2022`
- python version: `3.9.5 (default, Jun 4 2021, 12:28:51) [GCC 7.5.0]`
- espnet version: `espnet 0.10.7a1`
- pytorch version: `pytorch 1.8.1+cu111`
- Git hash: `f91410f712d1287cd6809c5bf26b54c5a40fe314`
- Commit date: `Mon Mar 14 22:32:17 2022 -0400`
## asr_train_asr_xlsr53_conformer_raw_te_bpe150_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_transformer5_lm_lm_train_lm_rnn_te_bpe150_valid.loss.ave_asr_model_valid.acc.ave/test_te|3040|28413|78.0|19.5|2.5|2.4|24.4|80.1|
|decode_transformer5_lm_lm_train_lm_rnn_te_bpe150_valid.loss.best_asr_model_valid.acc.ave/test_te|3040|28413|78.0|19.4|2.6|2.4|24.4|79.7|
|decode_transformer5_lm_lm_train_lm_transformer_te_bpe150_valid.loss.ave_asr_model_valid.acc.ave/test_te|3040|28413|78.0|19.5|2.6|2.5|24.5|79.9|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_transformer5_lm_lm_train_lm_rnn_te_bpe150_valid.loss.ave_asr_model_valid.acc.ave/test_te|3040|229419|95.6|2.2|2.2|1.6|6.1|80.1|
|decode_transformer5_lm_lm_train_lm_rnn_te_bpe150_valid.loss.best_asr_model_valid.acc.ave/test_te|3040|229419|95.6|2.2|2.2|1.6|6.0|79.7|
|decode_transformer5_lm_lm_train_lm_transformer_te_bpe150_valid.loss.ave_asr_model_valid.acc.ave/test_te|3040|229419|95.6|2.1|2.2|1.6|6.0|79.9|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_transformer5_lm_lm_train_lm_rnn_te_bpe150_valid.loss.ave_asr_model_valid.acc.ave/test_te|3040|146657|92.7|4.7|2.6|1.6|8.9|80.1|
|decode_transformer5_lm_lm_train_lm_rnn_te_bpe150_valid.loss.best_asr_model_valid.acc.ave/test_te|3040|146657|92.8|4.7|2.6|1.6|8.9|79.7|
|decode_transformer5_lm_lm_train_lm_transformer_te_bpe150_valid.loss.ave_asr_model_valid.acc.ave/test_te|3040|146657|92.8|4.6|2.6|1.6|8.9|79.9|
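In the tables above, the `Err` column is the sum of the substitution, deletion, and insertion rates; a quick sanity check of the first WER row (an illustrative sketch, not part of the ESPnet scoring scripts):

```python
def total_err(sub: float, dele: float, ins: float) -> float:
    # Err (%) = Sub + Del + Ins, rounded to one decimal as in the report
    return round(sub + dele + ins, 1)

print(total_err(19.5, 2.5, 2.4))  # 24.4, first WER row above
```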
## config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_xlsr53_conformer.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_xlsr53_conformer_raw_te_bpe150_sp
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 50
patience: 15
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 5
nbest_averaging_interval: 0
grad_clip: 5
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param:
- frontend.upstream
num_iters_per_epoch: null
batch_size: 64
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_te_bpe150_sp_ssl/train/speech_shape
- exp/asr_stats_raw_te_bpe150_sp_ssl/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_te_bpe150_sp_ssl/valid/speech_shape
- exp/asr_stats_raw_te_bpe150_sp_ssl/valid/text_shape.bpe
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_te_sp/wav.scp
- speech
- sound
- - dump/raw/train_te_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev_te/wav.scp
- speech
- sound
- - dump/raw/dev_te/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.0005
scheduler: warmuplr
scheduler_conf:
warmup_steps: 30000
token_list:
- <blank>
- <unk>
- ా
- ు
- ి
- ం
- ే
- వ
- న
- ల
- ▁అ
- క
- ్
- ో
- మ
- ▁
- త
- ర
- ప
- ీ
- ▁మ
- య
- డ
- ▁ప
- ద
- ని
- గ
- ▁వ
- స
- కు
- ె
- ర్
- ▁స
- ▁క
- ్య
- న్న
- ట
- ▁చ
- ▁త
- ాల
- ంట
- ూ
- శ
- ంద
- ార
- ▁న
- ారు
- ▁ఉ
- లు
- ▁ఆ
- ను
- జ
- రి
- ▁ప్ర
- ించ
- ధ
- ై
- హ
- ంది
- ్ర
- ▁ఇ
- చ
- రు
- స్త
- లో
- ▁ద
- డు
- ▁ఎ
- ▁వి
- ల్ల
- ణ
- గా
- ది
- డి
- న్నారు
- దు
- ిన
- ▁ర
- త్
- ొ
- ▁గ
- ంత
- ంగా
- ▁కా
- బ
- ▁జ
- ష
- ▁తెల
- ులు
- ▁ఏ
- ట్ట
- చ్చ
- తి
- నే
- కి
- ంలో
- ▁అవును
- ▁చెప్ప
- భ
- ▁ఈ
- ప్ప
- ▁ని
- ▁రా
- క్క
- ▁బ
- ట్ల
- ▁భ
- తో
- ▁కూడా
- ▁బా
- ద్ద
- ▁చేస
- ▁లే
- ాయి
- ానికి
- త్ర
- ▁కొ
- ఖ
- ▁ఒక
- ▁చాలా
- క్ష
- ళ
- ▁చేస్త
- ృ
- థ
- ఘ
- ఫ
- ఓ
- ౌ
- ఒ
- ఐ
- ఠ
- ఢ
- అ
- ఉ
- ఏ
- ఈ
- ౦
- ఇ
- ః
- ఋ
- ఝ
- ఔ
- ఛ
- ఞ
- ఊ
- ఎ
- ఆ
- ఙ
- <sos/eos>
init: xavier_uniform
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
joint_net_conf: null
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
extract_feats_in_collect_stats: false
use_preprocessor: true
token_type: bpe
bpemodel: data/te_token_list/bpe_unigram150/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: fused
frontend_conf:
frontends:
- frontend_type: default
n_fft: 512
win_length: 400
hop_length: 160
- frontend_type: s3prl
frontend_conf:
upstream: wav2vec2_xlsr
download_dir: ./hub
multilayer_feature: true
align_method: linear_projection
proj_dim: 200
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: utterance_mvn
normalize_conf: {}
preencoder: linear
preencoder_conf:
input_size: 400
output_size: 100
encoder: conformer
encoder_conf:
output_size: 256
attention_heads: 4
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
normalize_before: true
macaron_style: true
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 15
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
input_layer: embed
num_blocks: 6
linear_units: 2048
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
required:
- output_dir
- token_list
version: 0.10.7a1
distributed: false
```
</details>
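The `ctc_weight: 0.3` setting in the config above interpolates the CTC and attention losses during training; a minimal sketch of that weighted combination (illustrative only, not ESPnet's actual code):

```python
def hybrid_loss(loss_ctc: float, loss_att: float, ctc_weight: float = 0.3) -> float:
    # ESPnet-style hybrid CTC/attention objective: L = w * L_ctc + (1 - w) * L_att
    return ctc_weight * loss_ctc + (1.0 - ctc_weight) * loss_att

print(round(hybrid_loss(2.0, 1.0), 3))  # 1.3
```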
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
public-data/dlib_face_landmark_model
|
public-data
| 2022-03-23T22:54:12Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-03-23T22:52:02Z |
# dlib face landmark model
- http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2
|
ydshieh/roberta-base-squad2
|
ydshieh
| 2022-03-23T22:39:25Z | 57 | 0 |
transformers
|
[
"transformers",
"tf",
"roberta",
"question-answering",
"en",
"dataset:squad_v2",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-23T22:29:51Z |
---
language: en
datasets:
- squad_v2
license: cc-by-4.0
---
# roberta-base for QA
NOTE: This is version 2 of the model. See [this github issue](https://github.com/deepset-ai/FARM/issues/552) from the FARM repository for an explanation of why we updated. If you'd like to use version 1, specify `revision="v1.0"` when loading the model in Transformers 3.5. For example:
```
model_name = "deepset/roberta-base-squad2"
pipeline(model=model_name, tokenizer=model_name, revision="v1.0", task="question-answering")
```
## Overview
**Language model:** roberta-base
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD 2.0
**Eval data:** SQuAD 2.0
**Code:** See [example](https://github.com/deepset-ai/FARM/blob/master/examples/question_answering.py) in [FARM](https://github.com/deepset-ai/FARM/blob/master/examples/question_answering.py)
**Infrastructure**: 4x Tesla v100
## Hyperparameters
```
batch_size = 96
n_epochs = 2
base_LM_model = "roberta-base"
max_seq_len = 386
learning_rate = 3e-5
lr_schedule = LinearWarmup
warmup_proportion = 0.2
doc_stride=128
max_query_length=64
```
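The `LinearWarmup` schedule with `warmup_proportion = 0.2` ramps the learning rate up over the first 20% of training steps and decays it linearly afterwards; a minimal sketch of the idea (an illustration, not FARM's exact implementation):

```python
def linear_warmup_lr(step: int, total_steps: int, peak_lr: float = 3e-5,
                     warmup_proportion: float = 0.2) -> float:
    """Linear warmup to peak_lr, then linear decay back to zero."""
    warmup_steps = int(total_steps * warmup_proportion)
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * (total_steps - step) / (total_steps - warmup_steps)

# The peak is reached right at the end of warmup:
print(linear_warmup_lr(200, 1000))  # 3e-05
```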
## Using a distilled model instead
Please note that we have also released a distilled version of this model called [deepset/tinyroberta-squad2](https://huggingface.co/deepset/tinyroberta-squad2). The distilled model has comparable prediction quality and runs at twice the speed of the base model.
## Performance
Evaluated on the SQuAD 2.0 dev set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/).
```
"exact": 79.87029394424324,
"f1": 82.91251169582613,
"total": 11873,
"HasAns_exact": 77.93522267206478,
"HasAns_f1": 84.02838248389763,
"HasAns_total": 5928,
"NoAns_exact": 81.79983179142137,
"NoAns_f1": 81.79983179142137,
"NoAns_total": 5945
```
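The overall `exact` score is the total-weighted average of the `HasAns` and `NoAns` splits; a quick check using the numbers reported above (a sketch, not the official eval script):

```python
def combined_exact(has_exact, has_total, no_exact, no_total):
    # Weighted average over answerable (HasAns) and unanswerable (NoAns) questions
    return (has_exact * has_total + no_exact * no_total) / (has_total + no_total)

overall = combined_exact(77.93522267206478, 5928, 81.79983179142137, 5945)
print(round(overall, 2))  # 79.87, matching the reported "exact" score
```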
## Usage
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "deepset/roberta-base-squad2"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'Why is model conversion important?',
    'context': 'The option to convert models between FARM and transformers gives freedom to the user and lets people easily switch between frameworks.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
### In FARM
```python
from farm.modeling.adaptive_model import AdaptiveModel
from farm.modeling.tokenization import Tokenizer
from farm.infer import Inferencer
model_name = "deepset/roberta-base-squad2"
# a) Get predictions
nlp = Inferencer.load(model_name, task_type="question_answering")
QA_input = [{"questions": ["Why is model conversion important?"],
             "text": "The option to convert models between FARM and transformers gives freedom to the user and lets people easily switch between frameworks."}]
res = nlp.inference_from_dicts(dicts=QA_input, rest_api_schema=True)
# b) Load model & tokenizer
model = AdaptiveModel.convert_from_transformers(model_name, device="cpu", task_type="question_answering")
tokenizer = Tokenizer.load(model_name)
```
### In haystack
For doing QA at scale (i.e., many documents instead of a single paragraph), you can also load the model in [haystack](https://github.com/deepset-ai/haystack/):
```python
reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2")
# or
reader = TransformersReader(model_name_or_path="deepset/roberta-base-squad2",tokenizer="deepset/roberta-base-squad2")
```
## Authors
Branden Chan: `branden.chan [at] deepset.ai`
Timo Möller: `timo.moeller [at] deepset.ai`
Malte Pietsch: `malte.pietsch [at] deepset.ai`
Tanay Soni: `tanay.soni [at] deepset.ai`
## About us

We bring NLP to the industry via open source!
Our focus: Industry-specific language models & large-scale QA systems.
Some of our work:
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
- [FARM](https://github.com/deepset-ai/FARM)
- [Haystack](https://github.com/deepset-ai/haystack/)
Get in touch:
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Slack](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs)
|
radev/xlm-roberta-base-finetuned-panx-de
|
radev
| 2022-03-23T22:27:27Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-16T22:11:53Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8593216480764853
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1345
- F1: 0.8593
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 263 | 0.1807 | 0.8065 |
| 0.2218 | 2.0 | 526 | 0.1365 | 0.8485 |
| 0.2218 | 3.0 | 789 | 0.1345 | 0.8593 |
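The F1 reported above is the harmonic mean of precision and recall over predicted entity spans; a minimal sketch of the metric itself (illustrative, not the seqeval implementation used for scoring):

```python
def f1_score(precision: float, recall: float) -> float:
    # Harmonic mean of precision and recall; 0 when both are 0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(0.85, 0.87), 4))  # 0.8599
```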
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
huggingtweets/radagasttbrown
|
huggingtweets
| 2022-03-23T21:33:16Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-23T21:13:19Z |
---
language: en
thumbnail: http://www.huggingtweets.com/radagasttbrown/1648071147429/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1362404255798280192/yIKMf5AN_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Radagast 🌋</div>
<div style="text-align: center; font-size: 14px;">@radagasttbrown</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Radagast 🌋.
| Data | Radagast 🌋 |
| --- | --- |
| Tweets downloaded | 3228 |
| Retweets | 457 |
| Short tweets | 230 |
| Tweets kept | 2541 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1b1t67ko/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @radagasttbrown's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/boipgvkp) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/boipgvkp/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/radagasttbrown')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
BigSalmon/InformalToFormalLincoln30
|
BigSalmon
| 2022-03-23T20:51:13Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-23T20:36:45Z |
```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln30")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln30")
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
Text: failing to draw in the masses, the NBA has fallen into disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap solutions could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (Warriors vs. Rockets in Game 7):
text: eagerly anticipated by fans, game 7's are the highlight of the post-season.
text: ever-building in suspense, game 7's have the crowd captivated.
***
Essay Intro (South Korean TV Is Becoming Popular):
text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ).
text: increasingly held in critical esteem, south korean television continues to impress.
text: at the forefront of quality content, south korea is quickly achieving celebrity status.
***
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
text: failing to draw in the masses, the nba has ( fallen into / succumb to / bowed to ) disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap ( solutions / interventions / enhancements ) could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
```
|
bigmorning/my-gpt-model-4
|
bigmorning
| 2022-03-23T20:00:04Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-23T19:52:49Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: my-gpt-model-4
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# my-gpt-model-4
This model is a fine-tuned version of [bigmorning/my-gpt-model-3](https://huggingface.co/bigmorning/my-gpt-model-3) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 5.0556
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 5.0556 | 0 |
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.8.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
huggingtweets/ryiacy
|
huggingtweets
| 2022-03-23T19:51:46Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-23T19:28:42Z |
---
language: en
thumbnail: http://www.huggingtweets.com/ryiacy/1648065062687/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1424813722011410434/73S-oYNT_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">cyriac</div>
<div style="text-align: center; font-size: 14px;">@ryiacy</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from cyriac.
| Data | cyriac |
| --- | --- |
| Tweets downloaded | 1050 |
| Retweets | 32 |
| Short tweets | 60 |
| Tweets kept | 958 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/26de85bt/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ryiacy's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2p7goxic) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2p7goxic/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ryiacy')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
BigSalmon/MASKGPT2
|
BigSalmon
| 2022-03-23T19:26:53Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-23T19:20:45Z |
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
```
|
negfir/uncased_L-12_H-128_A-2
|
negfir
| 2022-03-23T19:18:33Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"bert",
"pretraining",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
] | null | 2022-03-23T18:49:57Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: uncased_L-12_H-128_A-2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# uncased_L-12_H-128_A-2
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.8.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
huggingtweets/eigenrobot-moridinamael
|
huggingtweets
| 2022-03-23T18:42:22Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-23T18:37:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/eigenrobot-moridinamael/1648060937936/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/615582548010229761/0zg9awKn_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1492994204758278144/rDnqNReU_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Twisted Mentat Matt & eigenrobot</div>
<div style="text-align: center; font-size: 14px;">@eigenrobot-moridinamael</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Twisted Mentat Matt & eigenrobot.
| Data | Twisted Mentat Matt | eigenrobot |
| --- | --- | --- |
| Tweets downloaded | 3145 | 3247 |
| Retweets | 1670 | 119 |
| Short tweets | 230 | 651 |
| Tweets kept | 1245 | 2477 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3njfftkj/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @eigenrobot-moridinamael's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1nbxxa8l) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1nbxxa8l/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/eigenrobot-moridinamael')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ScandinavianMrT/gpt2_ONION_prefinetune_4.0
|
ScandinavianMrT
| 2022-03-23T18:39:51Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-23T18:34:47Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2_ONION_prefinetune_4.0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2_ONION_prefinetune_4.0
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.6484
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 153 | 4.7368 |
| No log | 2.0 | 306 | 4.6732 |
| No log | 3.0 | 459 | 4.6527 |
| 4.8529 | 4.0 | 612 | 4.6484 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
DrishtiSharma/wav2vec2-large-xls-r-300m-sl-with-LM-v2
|
DrishtiSharma
| 2022-03-23T18:35:22Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"sl",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language:
- sl
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- model_for_talk
- mozilla-foundation/common_voice_8_0
- robust-speech-event
- sl
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-large-xls-r-300m-sl-with-LM-v2
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: sl
metrics:
- name: Test WER
type: wer
value: 0.21695212999560826
- name: Test CER
type: cer
value: 0.052850080572474256
- name: Test WER (+LM)
type: wer
value: 0.14551310203484116
- name: Test CER (+LM)
type: cer
value: 0.03927566711277415
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: sl
metrics:
- name: Dev WER
type: wer
value: 0.560722380639029
- name: Dev CER
type: cer
value: 0.2279626093074681
- name: Dev WER (+LM)
type: wer
value: 0.46486802661402354
- name: Dev CER (+LM)
type: cer
value: 0.21105136194592422
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: sl
metrics:
- name: Test WER
type: wer
value: 46.69
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - SL dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2855
- Wer: 0.2401
### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-sl-with-LM-v2 --dataset mozilla-foundation/common_voice_8_0 --config sl --split test --log_outputs
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-sl-with-LM-v2 --dataset speech-recognition-community-v2/dev_data --config sl --split validation --chunk_length_s 10 --stride_length_s 1
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
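The `linear` scheduler above ramps the learning rate up over the first 1000 warmup steps and then decays it linearly to zero by the end of training. A minimal sketch of that schedule (a hypothetical helper, not part of the actual training script; `total_steps=8000` matches the last logged step above):

```python
def linear_schedule_lr(step, base_lr=7e-05, warmup_steps=1000, total_steps=8000):
    """Linear warmup followed by linear decay, mirroring the HF `linear` scheduler."""
    if step < warmup_steps:
        # Ramp from 0 up to base_lr across the warmup phase.
        return base_lr * step / warmup_steps
    # Decay from base_lr at the end of warmup down to 0 at total_steps.
    remaining = max(0, total_steps - step)
    return base_lr * remaining / (total_steps - warmup_steps)

print(linear_schedule_lr(500))   # halfway through warmup
print(linear_schedule_lr(1000))  # peak learning rate
print(linear_schedule_lr(8000))  # end of training
```

The peak value equals the `learning_rate` hyperparameter; everything before and after is a straight-line interpolation.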
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.9294 | 6.1 | 500 | 2.9712 | 1.0 |
| 2.8305 | 12.2 | 1000 | 1.7073 | 0.9479 |
| 1.4795 | 18.29 | 1500 | 0.5756 | 0.6397 |
| 1.3433 | 24.39 | 2000 | 0.4968 | 0.5424 |
| 1.1766 | 30.49 | 2500 | 0.4185 | 0.4743 |
| 1.0017 | 36.59 | 3000 | 0.3303 | 0.3578 |
| 0.9358 | 42.68 | 3500 | 0.3003 | 0.3051 |
| 0.8358 | 48.78 | 4000 | 0.3045 | 0.2884 |
| 0.7647 | 54.88 | 4500 | 0.2866 | 0.2677 |
| 0.7482 | 60.98 | 5000 | 0.2829 | 0.2585 |
| 0.6943 | 67.07 | 5500 | 0.2782 | 0.2478 |
| 0.6586 | 73.17 | 6000 | 0.2911 | 0.2537 |
| 0.6425 | 79.27 | 6500 | 0.2817 | 0.2462 |
| 0.6067 | 85.37 | 7000 | 0.2910 | 0.2436 |
| 0.5974 | 91.46 | 7500 | 0.2875 | 0.2430 |
| 0.5812 | 97.56 | 8000 | 0.2852 | 0.2396 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
sammy786/wav2vec2-xlsr-bashkir
|
sammy786
| 2022-03-23T18:35:07Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ba",
"generated_from_trainer",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- ba
license: apache-2.0
tags:
- automatic-speech-recognition
- ba
- generated_from_trainer
- hf-asr-leaderboard
- model_for_talk
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: sammy786/wav2vec2-xlsr-bashkir
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: ba
metrics:
- name: Test WER
type: wer
value: 11.32
- name: Test CER
type: cer
value: 2.34
---
# sammy786/wav2vec2-xlsr-bashkir
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - ba dataset.
It achieves the following results on the evaluation set (10 percent of the train set merged with the other and dev sets):
- Loss:
- Wer:
## Model description
"facebook/wav2vec2-xls-r-1b" was finetuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data:
Common Voice Bashkir `train.tsv`, `dev.tsv` and `other.tsv` files.
## Training procedure
To create the training dataset, all available splits were concatenated and a 90-10 train-evaluation split was applied.
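A deterministic version of such a 90-10 split can be sketched as follows (illustrative only; the exact procedure used for this model is not documented in the card):

```python
import random

def train_eval_split(rows, eval_fraction=0.1, seed=13):
    """Shuffle deterministically, then hold out the final fraction for evaluation."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)  # fixed seed keeps the split reproducible
    cut = int(len(rows) * (1 - eval_fraction))
    return rows[:cut], rows[cut:]

train, evaluation = train_eval_split(range(1000))
print(len(train), len(evaluation))  # 900 100
```

Seeding the shuffle (here with the card's `seed: 13`) makes the held-out set stable across reruns.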
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000045637994662983496
- train_batch_size: 16
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Step | Training Loss | Validation Loss | Wer |
|:----:|:-------------:|:---------------:|:--------:|
| 200 | 5.387100 | 1.982867 | 1.000000 |
| 400 | 1.269800 | 0.369958 | 0.545755 |
| 600 | 0.903600 | 0.287705 | 0.465594 |
| 800 | 0.787300 | 0.235142 | 0.417091 |
| 1000 | 0.816300 | 0.206325 | 0.390534 |
| 1200 | 0.700500 | 0.197106 | 0.383987 |
| 1400 | 0.707100 | 0.179855 | 0.381368 |
| 1600 | 0.657800 | 0.181605 | 0.370593 |
| 1800 | 0.647800 | 0.168626 | 0.358767 |
| 2000 | 0.650700 | 0.164833 | 0.351483 |
| 2200 | 0.490900 | 0.168133 | 0.363309 |
| 2400 | 0.431000 | 0.161201 | 0.344350 |
| 2600 | 0.372100 | 0.160254 | 0.338280 |
| 2800 | 0.367500 | 0.150885 | 0.329687 |
| 3000 | 0.351300 | 0.154112 | 0.331392 |
| 3200 | 0.314800 | 0.147147 | 0.326700 |
| 3400 | 0.316800 | 0.142681 | 0.325090 |
| 3600 | 0.313000 | 0.138736 | 0.319553 |
| 3800 | 0.291800 | 0.138166 | 0.315570 |
| 4000 | 0.311300 | 0.135977 | 0.322894 |
| 4200 | 0.304900 | 0.128820 | 0.308627 |
| 4400 | 0.301600 | 0.129475 | 0.307440 |
| 4600 | 0.281800 | 0.131863 | 0.305967 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id sammy786/wav2vec2-xlsr-bashkir --dataset mozilla-foundation/common_voice_8_0 --config ba --split test
```
|
nouamanetazi/wav2vec2-xls-r-300m-ar
|
nouamanetazi
| 2022-03-23T18:35:04Z | 16 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ar",
"common_voice",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- ar
license: apache-2.0
tags:
- ar
- automatic-speech-recognition
- common_voice
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
datasets:
- common_voice
model-index:
- name: XLS-R-300M - Arabic
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: ar
metrics:
- name: Test WER
type: wer
value: 1.0
- name: Test CER
type: cer
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-ar
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the COMMON_VOICE - AR dataset.
It achieves the following results on the evaluation set:
- eval_loss: 3.0191
- eval_wer: 1.0
- eval_runtime: 252.2389
- eval_samples_per_second: 30.217
- eval_steps_per_second: 0.476
- epoch: 1.0
- step: 340
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 5
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
#### Evaluation Commands
Please use the evaluation script `eval.py` included in the repo.
1. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id nouamanetazi/wav2vec2-xls-r-300m-ar --dataset speech-recognition-community-v2/dev_data --config ar --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
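The `--chunk_length_s 5.0 --stride_length_s 1.0` flags make the pipeline decode long audio in overlapping windows rather than all at once. The windowing arithmetic can be sketched as below (a hypothetical helper, not part of `eval.py`; it assumes the stride is applied symmetrically on both sides of each chunk, as the HF ASR pipeline does by default):

```python
def chunk_windows(duration_s, chunk_length_s=5.0, stride_length_s=1.0):
    """Return (start, end) times of overlapping windows covering the audio.
    Consecutive windows overlap by stride_length_s on each side, so the
    effective step between window starts is chunk - 2 * stride."""
    step = chunk_length_s - 2 * stride_length_s
    start = 0.0
    windows = []
    while start < duration_s:
        windows.append((start, min(start + chunk_length_s, duration_s)))
        start += step
    return windows

for w in chunk_windows(12.0):
    print(w)
```

With these defaults a 12-second clip is decoded in four overlapping 5-second windows; the overlapping strides are trimmed before the per-window logits are stitched back together.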
|
infinitejoy/wav2vec2-large-xls-r-300m-finnish
|
infinitejoy
| 2022-03-23T18:34:46Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fi",
"generated_from_trainer",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- fi
license: apache-2.0
tags:
- automatic-speech-recognition
- fi
- generated_from_trainer
- hf-asr-leaderboard
- model_for_talk
- mozilla-foundation/common_voice_7_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: XLS-R-300M - Finnish
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: fi
metrics:
- name: Test WER
type: wer
value: 29.97
- name: Test CER
type: cer
value: NA
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-finnish
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - FI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2307
- Wer: 0.2984
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 70.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.9032 | 4.39 | 500 | 2.8768 | 1.0 |
| 1.5724 | 8.77 | 1000 | 0.5638 | 0.6438 |
| 1.1818 | 13.16 | 1500 | 0.3338 | 0.4759 |
| 1.0798 | 17.54 | 2000 | 0.2876 | 0.4086 |
| 1.0296 | 21.93 | 2500 | 0.2694 | 0.4248 |
| 1.0014 | 26.32 | 3000 | 0.2626 | 0.3733 |
| 0.9616 | 30.7 | 3500 | 0.2391 | 0.3294 |
| 0.9303 | 35.09 | 4000 | 0.2352 | 0.3218 |
| 0.9248 | 39.47 | 4500 | 0.2351 | 0.3207 |
| 0.8837 | 43.86 | 5000 | 0.2341 | 0.3103 |
| 0.8887 | 48.25 | 5500 | 0.2311 | 0.3115 |
| 0.8529 | 52.63 | 6000 | 0.2230 | 0.3001 |
| 0.8404 | 57.02 | 6500 | 0.2279 | 0.3054 |
| 0.8242 | 61.4 | 7000 | 0.2298 | 0.3006 |
| 0.8288 | 65.79 | 7500 | 0.2333 | 0.2997 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
emre/wav2vec2-xls-r-300m-gl-CV8
|
emre
| 2022-03-23T18:34:43Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"gl",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
language: gl
tags:
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-300m-gl-CV8
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice gl
type: common_voice
args: gl
metrics:
- name: Test WER
type: wer
value: 0.208
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8.0
type: mozilla-foundation/common_voice_8_0
args: gl
metrics:
- name: Test WER
type: wer
value: 22.94
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: gl
metrics:
- name: Test WER
type: wer
value: 47.82
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: gl
metrics:
- name: Test WER
type: wer
value: 50.8
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-gl-CV8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2151
- Wer: 0.2080
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.9427 | 4.9 | 500 | 2.8801 | 1.0 |
| 2.1594 | 9.8 | 1000 | 0.4092 | 0.4001 |
| 0.7332 | 14.71 | 1500 | 0.2151 | 0.2080 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.10.3
|
Baybars/wav2vec2-xls-r-300m-cv8-turkish
|
Baybars
| 2022-03-23T18:34:22Z | 34 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"tr",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language:
- tr
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
- tr
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the COMMON_VOICE - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4164
- Wer: 0.3098
- Cer: 0.0764
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Language Model
N-gram language model is trained by [mpoyraz](https://huggingface.co/mpoyraz/wav2vec2-xls-r-300m-cv7-turkish) on a Turkish Wikipedia articles using KenLM and [ngram-lm-wiki](https://github.com/mpoyraz/ngram-lm-wiki) repo was used to generate arpa LM and convert it into binary format.
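KenLM itself is a C++ toolchain (`lmplz` to estimate the arpa file, `build_binary` to convert it), so it is not shown here. As a toy illustration of the raw material an n-gram LM is estimated from — not the actual KenLM pipeline — counting n-grams over a tokenized corpus looks like this (the sample tokens are made up):

```python
from collections import Counter

def ngram_counts(tokens, n=3):
    """Count all n-grams of order n in a token sequence; these counts are
    the starting point for estimating an n-gram language model."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

tokens = "bu bir deneme bu bir test".split()
counts = ngram_counts(tokens, n=2)
print(counts[("bu", "bir")])  # the bigram "bu bir" occurs twice
```

KenLM then applies smoothing and backoff to these counts before writing the arpa file.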
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 0.6356 | 9.09 | 500 | 0.5055 | 0.5536 | 0.1381 |
| 0.3847 | 18.18 | 1000 | 0.4002 | 0.4247 | 0.1065 |
| 0.3377 | 27.27 | 1500 | 0.4193 | 0.4167 | 0.1078 |
| 0.2175 | 36.36 | 2000 | 0.4351 | 0.3861 | 0.0974 |
| 0.2074 | 45.45 | 2500 | 0.3962 | 0.3622 | 0.0916 |
| 0.159 | 54.55 | 3000 | 0.4062 | 0.3526 | 0.0888 |
| 0.1882 | 63.64 | 3500 | 0.3991 | 0.3445 | 0.0850 |
| 0.1766 | 72.73 | 4000 | 0.4214 | 0.3396 | 0.0847 |
| 0.116 | 81.82 | 4500 | 0.4182 | 0.3265 | 0.0812 |
| 0.0718 | 90.91 | 5000 | 0.4259 | 0.3191 | 0.0781 |
| 0.019 | 100.0 | 5500 | 0.4164 | 0.3098 | 0.0764 |
## Evaluation Commands
Please install [unicode_tr](https://pypi.org/project/unicode_tr/) package before running evaluation. It is used for Turkish text processing.
1. To evaluate on `mozilla-foundation/common_voice_7_0` with split `test`
```bash
python eval.py --model_id Baybars/wav2vec2-xls-r-300m-cv8-turkish --dataset mozilla-foundation/common_voice_8_0 --config tr --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id Baybars/wav2vec2-xls-r-300m-cv8-turkish --dataset speech-recognition-community-v2/dev_data --config tr --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
vutankiet2901/wav2vec2-xls-r-1b-ja
|
vutankiet2901
| 2022-03-23T18:34:17Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"common-voice",
"hf-asr-leaderboard",
"ja",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
language:
- ja
tags:
- automatic-speech-recognition
- common-voice
- hf-asr-leaderboard
- ja
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-xls-r-1b
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7.0
type: mozilla-foundation/common_voice_7_0
args: ja
metrics:
- name: Test WER (with LM)
type: wer
value: 11.77
- name: Test CER (with LM)
type: cer
value: 5.22
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8.0
type: mozilla-foundation/common_voice_8_0
args: ja
metrics:
- name: Test WER (with LM)
type: wer
value: 12.23
- name: Test CER (with LM)
type: cer
value: 5.33
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: ja
metrics:
- name: Test WER (with LM)
type: wer
value: 29.35
- name: Test CER (with LM)
type: cer
value: 16.43
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: ja
metrics:
- name: Test CER
type: cer
value: 19.48
---
## Model description
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - JA
### Benchmark WER result:
| | [COMMON VOICE 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) | [COMMON VOICE 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0) |
|---|---|---|
|without LM| 16.97 | 17.95 |
|with 4-grams LM| 11.77 | 12.23|
### Benchmark CER result:
| | [COMMON VOICE 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) | [COMMON VOICE 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0) |
|---|---|---|
|without LM| 6.82 | 7.05 |
|with 4-grams LM| 5.22 | 5.33 |
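Both WER and CER are normalized edit distances — Levenshtein distance over words versus over characters, divided by the reference length. A minimal reference implementation (a sketch for illustration, not the evaluation code used for the numbers above):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences, via the standard DP."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (r != h)))  # substitution
        prev = cur
    return prev[-1]

def wer(reference, hypothesis):
    """Word error rate: word-level edit distance over reference word count."""
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / len(ref_words)

print(wer("the cat sat", "the cat sit"))  # one substitution out of three words
```

CER is the same computation applied to character sequences instead of word lists.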
## Evaluation
Please use the `eval.py` script to run the evaluation:
```bash
pip install mecab-python3 unidic-lite pykakasi
python eval.py --model_id vutankiet2901/wav2vec2-xls-r-1b-ja --dataset mozilla-foundation/common_voice_8_0 --config ja --split test --log_outputs
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 3.484 | 9.49 | 1500 | 1.1849 | 0.7543 | 0.4099 |
| 1.3582 | 18.98 | 3000 | 0.4320 | 0.3489 | 0.1591 |
| 1.1716 | 28.48 | 4500 | 0.3835 | 0.3175 | 0.1454 |
| 1.0951 | 37.97 | 6000 | 0.3732 | 0.3033 | 0.1405 |
| 1.04 | 47.47 | 7500 | 0.3485 | 0.2898 | 0.1360 |
| 0.9768 | 56.96 | 9000 | 0.3386 | 0.2787 | 0.1309 |
| 0.9129 | 66.45 | 10500 | 0.3363 | 0.2711 | 0.1272 |
| 0.8614 | 75.94 | 12000 | 0.3386 | 0.2676 | 0.1260 |
| 0.8092 | 85.44 | 13500 | 0.3356 | 0.2610 | 0.1240 |
| 0.7658 | 94.93 | 15000 | 0.3316 | 0.2564 | 0.1218 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
shahukareem/xls-r-300m-dv
|
shahukareem
| 2022-03-23T18:34:14Z | 57 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"dv",
"generated_from_trainer",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- dv
license: apache-2.0
tags:
- automatic-speech-recognition
- dv
- generated_from_trainer
- hf-asr-leaderboard
- model_for_talk
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: XLS-R-300M - Dhivehi
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: dv
metrics:
- name: Test WER
type: wer
value: 21.31
- name: Test CER
type: cer
value: 3.82
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r-300m-dv
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2855
- Wer: 0.2665
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.3386 | 0.66 | 400 | 1.1411 | 0.9432 |
| 0.6543 | 1.33 | 800 | 0.5099 | 0.6749 |
| 0.4646 | 1.99 | 1200 | 0.4133 | 0.5968 |
| 0.3748 | 2.65 | 1600 | 0.3534 | 0.5515 |
| 0.3323 | 3.32 | 2000 | 0.3635 | 0.5527 |
| 0.3269 | 3.98 | 2400 | 0.3587 | 0.5423 |
| 0.2984 | 4.64 | 2800 | 0.3340 | 0.5073 |
| 0.2841 | 5.31 | 3200 | 0.3279 | 0.5004 |
| 0.2664 | 5.97 | 3600 | 0.3114 | 0.4845 |
| 0.2397 | 6.63 | 4000 | 0.3174 | 0.4920 |
| 0.2332 | 7.3 | 4400 | 0.3110 | 0.4911 |
| 0.2304 | 7.96 | 4800 | 0.3123 | 0.4785 |
| 0.2134 | 8.62 | 5200 | 0.2984 | 0.4557 |
| 0.2066 | 9.29 | 5600 | 0.3013 | 0.4723 |
| 0.1951 | 9.95 | 6000 | 0.2934 | 0.4487 |
| 0.1806 | 10.61 | 6400 | 0.2802 | 0.4547 |
| 0.1727 | 11.28 | 6800 | 0.2842 | 0.4333 |
| 0.1666 | 11.94 | 7200 | 0.2873 | 0.4272 |
| 0.1562 | 12.6 | 7600 | 0.3042 | 0.4373 |
| 0.1483 | 13.27 | 8000 | 0.3122 | 0.4313 |
| 0.1465 | 13.93 | 8400 | 0.2760 | 0.4226 |
| 0.1335 | 14.59 | 8800 | 0.3112 | 0.4243 |
| 0.1293 | 15.26 | 9200 | 0.3002 | 0.4133 |
| 0.1264 | 15.92 | 9600 | 0.2985 | 0.4145 |
| 0.1179 | 16.58 | 10000 | 0.2925 | 0.4012 |
| 0.1171 | 17.25 | 10400 | 0.3127 | 0.4012 |
| 0.1141 | 17.91 | 10800 | 0.2980 | 0.3908 |
| 0.108 | 18.57 | 11200 | 0.3108 | 0.3951 |
| 0.1045 | 19.24 | 11600 | 0.3269 | 0.3908 |
| 0.1047 | 19.9 | 12000 | 0.2998 | 0.3868 |
| 0.0937 | 20.56 | 12400 | 0.2918 | 0.3875 |
| 0.0949 | 21.23 | 12800 | 0.2906 | 0.3657 |
| 0.0879 | 21.89 | 13200 | 0.2974 | 0.3731 |
| 0.0854 | 22.55 | 13600 | 0.2943 | 0.3711 |
| 0.0851 | 23.22 | 14000 | 0.2919 | 0.3580 |
| 0.0789 | 23.88 | 14400 | 0.2983 | 0.3560 |
| 0.0796 | 24.54 | 14800 | 0.3131 | 0.3544 |
| 0.0761 | 25.21 | 15200 | 0.2996 | 0.3616 |
| 0.0755 | 25.87 | 15600 | 0.2972 | 0.3506 |
| 0.0726 | 26.53 | 16000 | 0.2902 | 0.3474 |
| 0.0707 | 27.2 | 16400 | 0.3083 | 0.3480 |
| 0.0669 | 27.86 | 16800 | 0.3035 | 0.3330 |
| 0.0637 | 28.52 | 17200 | 0.2963 | 0.3370 |
| 0.0596 | 29.19 | 17600 | 0.2830 | 0.3326 |
| 0.0583 | 29.85 | 18000 | 0.2969 | 0.3287 |
| 0.0566 | 30.51 | 18400 | 0.3002 | 0.3480 |
| 0.0574 | 31.18 | 18800 | 0.2916 | 0.3296 |
| 0.0536 | 31.84 | 19200 | 0.2933 | 0.3225 |
| 0.0548 | 32.5 | 19600 | 0.2900 | 0.3179 |
| 0.0506 | 33.17 | 20000 | 0.3073 | 0.3225 |
| 0.0511 | 33.83 | 20400 | 0.2925 | 0.3275 |
| 0.0483 | 34.49 | 20800 | 0.2919 | 0.3245 |
| 0.0456 | 35.16 | 21200 | 0.2859 | 0.3105 |
| 0.0445 | 35.82 | 21600 | 0.2864 | 0.3080 |
| 0.0437 | 36.48 | 22000 | 0.2989 | 0.3084 |
| 0.04 | 37.15 | 22400 | 0.2887 | 0.3060 |
| 0.0406 | 37.81 | 22800 | 0.2870 | 0.3013 |
| 0.0397 | 38.47 | 23200 | 0.2793 | 0.3020 |
| 0.0383 | 39.14 | 23600 | 0.2955 | 0.2943 |
| 0.0345 | 39.8 | 24000 | 0.2813 | 0.2905 |
| 0.0331 | 40.46 | 24400 | 0.2845 | 0.2845 |
| 0.0338 | 41.13 | 24800 | 0.2832 | 0.2925 |
| 0.0333 | 41.79 | 25200 | 0.2889 | 0.2849 |
| 0.0325 | 42.45 | 25600 | 0.2808 | 0.2847 |
| 0.0314 | 43.12 | 26000 | 0.2867 | 0.2801 |
| 0.0288 | 43.78 | 26400 | 0.2865 | 0.2834 |
| 0.0291 | 44.44 | 26800 | 0.2863 | 0.2806 |
| 0.0269 | 45.11 | 27200 | 0.2941 | 0.2736 |
| 0.0275 | 45.77 | 27600 | 0.2897 | 0.2736 |
| 0.0271 | 46.43 | 28000 | 0.2857 | 0.2695 |
| 0.0251 | 47.1 | 28400 | 0.2881 | 0.2702 |
| 0.0243 | 47.76 | 28800 | 0.2901 | 0.2684 |
| 0.0244 | 48.42 | 29200 | 0.2849 | 0.2679 |
| 0.0232 | 49.09 | 29600 | 0.2849 | 0.2677 |
| 0.0224 | 49.75 | 30000 | 0.2855 | 0.2665 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
| sammy786/wav2vec2-xlsr-finnish | sammy786 | 2022-03-23T18:34:11Z | 8 | 0 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "fi", "generated_from_trainer", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:05Z |
---
language:
- fi
license: apache-2.0
tags:
- automatic-speech-recognition
- fi
- generated_from_trainer
- hf-asr-leaderboard
- model_for_talk
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: sammy786/wav2vec2-xlsr-finnish
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: fi
metrics:
- name: Test WER
type: wer
value: 13.72
- name: Test CER
type: cer
value: 2.35
---
# sammy786/wav2vec2-xlsr-finnish
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - fi dataset.
It achieves the following results on the evaluation set (10 percent of the combined train, `other`, and dev data):
- Loss: 0.0876
- Wer: 23.02%
## Model description
The `facebook/wav2vec2-xls-r-1b` checkpoint was fine-tuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data: Common Voice Finnish `train.tsv`, `dev.tsv`, `invalidated.tsv`, and `other.tsv`.
## Training procedure
To create the train dataset, all available splits were concatenated and a 90-10 train-evaluation split was applied.
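The 90-10 split described above can be sketched as follows (a hypothetical helper — the card does not include the actual preprocessing script, and reusing seed 13 from the hyperparameters below is an assumption):

```python
import random

def train_eval_split(rows, eval_fraction=0.10, seed=13):
    """Deterministic shuffle, then hold out the final eval_fraction for evaluation."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)
    cut = round(len(rows) * (1 - eval_fraction))
    return rows[:cut], rows[cut:]

train, evaluation = train_eval_split(range(1000))
print(len(train), len(evaluation))  # 900 100
```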
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000045637994662983496
- train_batch_size: 8
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Step | Training Loss | Validation Loss | Wer |
|:----:|:-------------:|:---------------:|:--------:|
| 200 | 4.253700 | 0.881733 | 0.967007 |
| 400 | 0.864800 | 0.226977 | 0.420836 |
| 600 | 0.607000 | 0.157473 | 0.343375 |
| 800 | 0.380200 | 0.145640 | 0.302672 |
| 1000 | 0.318400 | 0.128028 | 0.293886 |
| 1200 | 0.261100 | 0.121414 | 0.289941 |
| 1400 | 0.232300 | 0.113451 | 0.279182 |
| 1600 | 0.216600 | 0.113649 | 0.282948 |
| 1800 | 0.202500 | 0.112375 | 0.276134 |
| 2000 | 0.190000 | 0.105725 | 0.273803 |
| 2200 | 0.171000 | 0.109715 | 0.270755 |
| 2400 | 0.156500 | 0.105042 | 0.264300 |
| 2600 | 0.155600 | 0.108337 | 0.260714 |
| 2800 | 0.149100 | 0.112435 | 0.263583 |
| 3000 | 0.145100 | 0.106193 | 0.261969 |
| 3200 | 0.131700 | 0.102860 | 0.251210 |
| 3400 | 0.129100 | 0.096058 | 0.246907 |
| 3600 | 0.121600 | 0.099932 | 0.246369 |
| 3800 | 0.112000 | 0.099041 | 0.244397 |
| 4000 | 0.114100 | 0.101566 | 0.242604 |
| 4200 | 0.111500 | 0.089498 | 0.239197 |
| 4400 | 0.099800 | 0.092835 | 0.240990 |
| 4600 | 0.095300 | 0.093518 | 0.238121 |
| 4800 | 0.094300 | 0.090783 | 0.240631 |
| 5000 | 0.089000 | 0.094046 | 0.238479 |
| 5200 | 0.088000 | 0.089342 | 0.235252 |
| 5400 | 0.083600 | 0.087770 | 0.234535 |
| 5600 | 0.083600 | 0.088804 | 0.234355 |
| 5800 | 0.080300 | 0.090168 | 0.231307 |
| 6000 | 0.078100 | 0.090163 | 0.230949 |
| 6200 | 0.075600 | 0.088876 | 0.232383 |
| 6400 | 0.078700 | 0.087235 | 0.232024 |
| 6600 | 0.074800 | 0.086825 | 0.231486 |
| 6800 | 0.076400 | 0.087308 | 0.231845 |
| 7000 | 0.070700 | 0.087695 | 0.230769 |
| 7200 | 0.075500 | 0.087555 | 0.230231 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id sammy786/wav2vec2-xlsr-finnish --dataset mozilla-foundation/common_voice_8_0 --config fi --split test
```
| infinitejoy/wav2vec2-large-xls-r-300m-basaa | infinitejoy | 2022-03-23T18:33:50Z | 10 | 1 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_7_0", "robust-speech-event", "bas", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:05Z |
---
language:
- bas
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_7_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: XLS-R-300M - Basaa
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: bas
metrics:
- name: Test WER
type: wer
value: 104.08
- name: Test CER
type: cer
value: 228.48
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-basaa
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - BAS dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5975
- Wer: 0.4981
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 200.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 2.9287 | 15.62 | 500 | 2.8774 | 1.0 |
| 1.1182 | 31.25 | 1000 | 0.6248 | 0.7131 |
| 0.8329 | 46.88 | 1500 | 0.5573 | 0.5792 |
| 0.7109 | 62.5 | 2000 | 0.5420 | 0.5683 |
| 0.6295 | 78.12 | 2500 | 0.5166 | 0.5395 |
| 0.5715 | 93.75 | 3000 | 0.5487 | 0.5629 |
| 0.5016 | 109.38 | 3500 | 0.5370 | 0.5471 |
| 0.4661 | 125.0 | 4000 | 0.5621 | 0.5395 |
| 0.423 | 140.62 | 4500 | 0.5658 | 0.5248 |
| 0.3793 | 156.25 | 5000 | 0.5921 | 0.4981 |
| 0.3651 | 171.88 | 5500 | 0.5987 | 0.4888 |
| 0.3351 | 187.5 | 6000 | 0.6017 | 0.4948 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
| LegolasTheElf/Wav2Vec2_xls_r_lm_300m_hi | LegolasTheElf | 2022-03-23T18:33:41Z | 11 | 0 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "Openslr Multilingual", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_7_0", "robust-speech-event", "hi", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:04Z |
---
language:
- hi
license: apache-2.0
tags:
- Openslr Multilingual
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_7_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: Wav2Vec2_xls_r_300m_hi_final
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7.0
type: mozilla-foundation/common_voice_7_0
args: hi
metrics:
- name: Test WER
type: wer
value: 34.21
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Wav2Vec2_xls_r_300m_hi_final
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the ['Openslr Multilingual and code-switching ASR challenge'](http://www.openslr.org/103/) dataset and ['mozilla-foundation/common_voice_7_0'](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3035
- Wer: 0.3137
- Cer: 0.0972
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 0.9821 | 0.64 | 400 | 0.5059 | 0.4783 | 0.1573 |
| 0.6861 | 1.28 | 800 | 0.4201 | 0.4247 | 0.1356 |
| 0.585 | 1.92 | 1200 | 0.3797 | 0.3811 | 0.1210 |
| 0.5193 | 2.56 | 1600 | 0.3577 | 0.3652 | 0.1152 |
| 0.4583 | 3.21 | 2000 | 0.3422 | 0.3519 | 0.1111 |
| 0.4282 | 3.85 | 2400 | 0.3261 | 0.3450 | 0.1071 |
| 0.3951 | 4.49 | 2800 | 0.3201 | 0.3325 | 0.1048 |
| 0.3619 | 5.13 | 3200 | 0.3167 | 0.3296 | 0.1030 |
| 0.345 | 5.77 | 3600 | 0.3157 | 0.3210 | 0.1013 |
| 0.338 | 6.41 | 4000 | 0.3051 | 0.3143 | 0.0982 |
| 0.3155 | 7.05 | 4400 | 0.3059 | 0.3154 | 0.0986 |
| 0.3057 | 7.69 | 4800 | 0.3035 | 0.3137 | 0.0972 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
| Harveenchadha/vakyansh_hindi_base_pretrained | Harveenchadha | 2022-03-23T18:33:38Z | 5 | 1 | transformers | ["transformers", "pytorch", "wav2vec2", "pretraining", "hf-asr-leaderboard", "hi", "model_for_talk", "pretrained", "robust-speech-event", "speech", "arxiv:2107.07402", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2022-03-02T23:29:04Z |
---
language: hi
tags:
- hf-asr-leaderboard
- hi
- model_for_talk
- pretrained
- robust-speech-event
- speech
license: apache-2.0
---
Hindi Pretrained model on 4200 hours. [Link](https://arxiv.org/abs/2107.07402)
| AndrewMcDowell/wav2vec2-xls-r-300m-arabic | AndrewMcDowell | 2022-03-23T18:33:36Z | 28 | 0 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "ar", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_7_0", "robust-speech-event", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:04Z |
---
language:
- ar
license: apache-2.0
tags:
- ar
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_7_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: XLS-R-300M - Arabic
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: ar
metrics:
- name: Test WER
type: wer
value: 47.54
- name: Test CER
type: cer
value: 17.64
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: ar
metrics:
- name: Test WER
type: wer
value: 93.72
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: ar
metrics:
- name: Test WER
type: wer
value: 92.49
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLS-R-300M - Arabic
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4502
- Wer: 0.4783
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 5.0
- mixed_precision_training: Native AMP
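The `linear` scheduler above ramps the learning rate from 0 to 7.5e-5 over the 2000 warmup steps, then decays it linearly back to zero by the final step. A sketch of that schedule (`total_steps=11500` is read off the last row of the results table and is an approximation):

```python
def linear_warmup_decay_lr(step, base_lr=7.5e-05, warmup_steps=2000, total_steps=11500):
    """Transformers-style 'linear' schedule: linear warmup, then linear decay to zero."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(linear_warmup_decay_lr(1000))   # halfway through warmup: 3.75e-05
print(linear_warmup_decay_lr(2000))   # peak: 7.5e-05
print(linear_warmup_decay_lr(11500))  # end of training: 0.0
```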
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.7972 | 0.43 | 500 | 5.1401 | 1.0 |
| 3.3241 | 0.86 | 1000 | 3.3220 | 1.0 |
| 3.1432 | 1.29 | 1500 | 3.0806 | 0.9999 |
| 2.9297 | 1.72 | 2000 | 2.5678 | 1.0057 |
| 2.2593 | 2.14 | 2500 | 1.1068 | 0.8218 |
| 2.0504 | 2.57 | 3000 | 0.7878 | 0.7114 |
| 1.937 | 3.0 | 3500 | 0.6955 | 0.6450 |
| 1.8491 | 3.43 | 4000 | 0.6452 | 0.6304 |
| 1.803 | 3.86 | 4500 | 0.5961 | 0.6042 |
| 1.7545 | 4.29 | 5000 | 0.5550 | 0.5748 |
| 1.7045 | 4.72 | 5500 | 0.5374 | 0.5743 |
| 1.6733 | 5.15 | 6000 | 0.5337 | 0.5404 |
| 1.6761 | 5.57 | 6500 | 0.5054 | 0.5266 |
| 1.655 | 6.0 | 7000 | 0.4926 | 0.5243 |
| 1.6252 | 6.43 | 7500 | 0.4946 | 0.5183 |
| 1.6209 | 6.86 | 8000 | 0.4915 | 0.5194 |
| 1.5772 | 7.29 | 8500 | 0.4725 | 0.5104 |
| 1.5602 | 7.72 | 9000 | 0.4726 | 0.5097 |
| 1.5783 | 8.15 | 9500 | 0.4667 | 0.4956 |
| 1.5442 | 8.58 | 10000 | 0.4685 | 0.4937 |
| 1.5597 | 9.01 | 10500 | 0.4708 | 0.4957 |
| 1.5406 | 9.43 | 11000 | 0.4539 | 0.4810 |
| 1.5274 | 9.86 | 11500 | 0.4502 | 0.4783 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
| abidlabs/speech-text | abidlabs | 2022-03-23T18:33:30Z | 7 | 0 | transformers | ["transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "en", "hf-asr-leaderboard", "mozilla-foundation/common_voice_6_0", "robust-speech-event", "speech", "xlsr-fine-tuning-week", "dataset:common_voice", "dataset:mozilla-foundation/common_voice_6_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-07T19:09:18Z |
---
language: en
datasets:
- common_voice
- mozilla-foundation/common_voice_6_0
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- en
- hf-asr-leaderboard
- mozilla-foundation/common_voice_6_0
- robust-speech-event
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 English by Jonatas Grosman
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice en
type: common_voice
args: en
metrics:
- name: Test WER
type: wer
value: 19.06
- name: Test CER
type: cer
value: 7.69
- name: Test WER (+LM)
type: wer
value: 14.81
- name: Test CER (+LM)
type: cer
value: 6.84
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: en
metrics:
- name: Dev WER
type: wer
value: 27.72
- name: Dev CER
type: cer
value: 11.65
- name: Dev WER (+LM)
type: wer
value: 20.85
- name: Dev CER (+LM)
type: cer
value: 11.01
---
# Wav2Vec2-Large-XLSR-53-English
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on English using the [Common Voice](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)
The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint
## Usage
The model can be used directly (without a language model) as follows...
Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:
```python
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-large-xlsr-53-english")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```
Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "en"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-english"
SAMPLES = 10
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
    batch["speech"] = speech_array
    batch["sentence"] = batch["sentence"].upper()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)

for i, predicted_sentence in enumerate(predicted_sentences):
    print("-" * 100)
    print("Reference:", test_dataset[i]["sentence"])
    print("Prediction:", predicted_sentence)
```
| Reference | Prediction |
| ------------- | ------------- |
| "SHE'LL BE ALL RIGHT." | SHE'LL BE ALL RIGHT |
| SIX | SIX |
| "ALL'S WELL THAT ENDS WELL." | ALL AS WELL THAT ENDS WELL |
| DO YOU MEAN IT? | DO YOU MEAN IT |
| THE NEW PATCH IS LESS INVASIVE THAN THE OLD ONE, BUT STILL CAUSES REGRESSIONS. | THE NEW PATCH IS LESS INVASIVE THAN THE OLD ONE BUT STILL CAUSES REGRESSION |
| HOW IS MOZILLA GOING TO HANDLE AMBIGUITIES LIKE QUEUE AND CUE? | HOW IS MOSLILLAR GOING TO HANDLE ANDBEWOOTH HIS LIKE Q AND Q |
| "I GUESS YOU MUST THINK I'M KINDA BATTY." | RUSTIAN WASTIN PAN ONTE BATTLY |
| NO ONE NEAR THE REMOTE MACHINE YOU COULD RING? | NO ONE NEAR THE REMOTE MACHINE YOU COULD RING |
| SAUCE FOR THE GOOSE IS SAUCE FOR THE GANDER. | SAUCE FOR THE GUICE IS SAUCE FOR THE GONDER |
| GROVES STARTED WRITING SONGS WHEN SHE WAS FOUR YEARS OLD. | GRAFS STARTED WRITING SONGS WHEN SHE WAS FOUR YEARS OLD |
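The `argmax` + `batch_decode` steps in the script above amount to greedy CTC decoding: collapse runs of repeated frame predictions, then drop blank tokens. A minimal sketch of that rule (the token ids are illustrative, and `blank_id=0` is an assumption — the real id comes from the processor's vocabulary):

```python
def ctc_greedy_collapse(frame_ids, blank_id=0):
    """Collapse repeated ids, then remove blanks; a blank between repeats keeps genuine doubles."""
    out, prev = [], None
    for t in frame_ids:
        if t != prev and t != blank_id:
            out.append(t)
        prev = t
    return out

print(ctc_greedy_collapse([7, 7, 0, 7, 4, 4]))  # [7, 7, 4] -- the blank preserves the double letter
```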
## Evaluation
1. To evaluate on `mozilla-foundation/common_voice_6_0` with split `test`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-english --dataset mozilla-foundation/common_voice_6_0 --config en --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-english --dataset speech-recognition-community-v2/dev_data --config en --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{grosman2021wav2vec2-large-xlsr-53-english,
title={XLSR Wav2Vec2 English by Jonatas Grosman},
author={Grosman, Jonatas},
publisher={Hugging Face},
journal={Hugging Face Hub},
howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english}},
year={2021}
}
```
| infinitejoy/wav2vec2-large-xls-r-300m-kurdish | infinitejoy | 2022-03-23T18:33:23Z | 98 | 4 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "kmr", "model_for_talk", "mozilla-foundation/common_voice_7_0", "robust-speech-event", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:05Z |
---
language:
- kmr
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- kmr
- model_for_talk
- mozilla-foundation/common_voice_7_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: XLS-R-300M - Kurmanji Kurdish
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: kmr
metrics:
- name: Test WER
type: wer
value: 102.308
- name: Test CER
type: cer
value: 538.748
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-kurdish
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - KMR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2548
- Wer: 0.2688
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.3161 | 12.27 | 2000 | 0.4199 | 0.4797 |
| 1.0643 | 24.54 | 4000 | 0.2982 | 0.3721 |
| 0.9718 | 36.81 | 6000 | 0.2762 | 0.3333 |
| 0.8772 | 49.08 | 8000 | 0.2586 | 0.3051 |
| 0.8236 | 61.35 | 10000 | 0.2575 | 0.2865 |
| 0.7745 | 73.62 | 12000 | 0.2603 | 0.2816 |
| 0.7297 | 85.89 | 14000 | 0.2539 | 0.2727 |
| 0.7079 | 98.16 | 16000 | 0.2554 | 0.2681 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
| DrishtiSharma/wav2vec2-large-xls-r-300m-or-dx12 | DrishtiSharma | 2022-03-23T18:33:15Z | 8 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_8_0", "or", "robust-speech-event", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:04Z |
---
language:
- or
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- model_for_talk
- mozilla-foundation/common_voice_8_0
- or
- robust-speech-event
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-or-dx12
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: or
metrics:
- name: Test WER
type: wer
value: 0.5947242206235012
- name: Test CER
type: cer
value: 0.18272388876724327
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: or
metrics:
- name: Test WER
type: wer
value: NA
- name: Test CER
type: cer
value: NA
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-or-dx12
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4638
- Wer: 0.5602
### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-or-dx12 --dataset mozilla-foundation/common_voice_8_0 --config or --split test --log_outputs
```
2. To evaluate on `speech-recognition-community-v2/dev_data`

The Oriya language isn't available in `speech-recognition-community-v2/dev_data`.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 13.5059 | 4.17 | 100 | 10.3789 | 1.0 |
| 4.5964 | 8.33 | 200 | 4.3294 | 1.0 |
| 3.4448 | 12.5 | 300 | 3.7903 | 1.0 |
| 3.3683 | 16.67 | 400 | 3.5289 | 1.0 |
| 2.042 | 20.83 | 500 | 1.1531 | 0.7857 |
| 0.5721 | 25.0 | 600 | 1.0267 | 0.7646 |
| 0.3274 | 29.17 | 700 | 1.0773 | 0.6938 |
| 0.2466 | 33.33 | 800 | 1.0323 | 0.6647 |
| 0.2047 | 37.5 | 900 | 1.1255 | 0.6733 |
| 0.1847 | 41.67 | 1000 | 1.1194 | 0.6515 |
| 0.1453 | 45.83 | 1100 | 1.1215 | 0.6601 |
| 0.1367 | 50.0 | 1200 | 1.1898 | 0.6627 |
| 0.1334 | 54.17 | 1300 | 1.3082 | 0.6687 |
| 0.1041 | 58.33 | 1400 | 1.2514 | 0.6177 |
| 0.1024 | 62.5 | 1500 | 1.2055 | 0.6528 |
| 0.0919 | 66.67 | 1600 | 1.4125 | 0.6369 |
| 0.074 | 70.83 | 1700 | 1.4006 | 0.6634 |
| 0.0681 | 75.0 | 1800 | 1.3943 | 0.6131 |
| 0.0709 | 79.17 | 1900 | 1.3545 | 0.6296 |
| 0.064 | 83.33 | 2000 | 1.2437 | 0.6237 |
| 0.0552 | 87.5 | 2100 | 1.3762 | 0.6190 |
| 0.056 | 91.67 | 2200 | 1.3763 | 0.6323 |
| 0.0514 | 95.83 | 2300 | 1.2897 | 0.6164 |
| 0.0409 | 100.0 | 2400 | 1.4257 | 0.6104 |
| 0.0379 | 104.17 | 2500 | 1.4219 | 0.5853 |
| 0.0367 | 108.33 | 2600 | 1.4361 | 0.6032 |
| 0.0412 | 112.5 | 2700 | 1.4713 | 0.6098 |
| 0.0353 | 116.67 | 2800 | 1.4132 | 0.6369 |
| 0.0336 | 120.83 | 2900 | 1.5210 | 0.6098 |
| 0.0302 | 125.0 | 3000 | 1.4686 | 0.5939 |
| 0.0398 | 129.17 | 3100 | 1.5456 | 0.6204 |
| 0.0291 | 133.33 | 3200 | 1.4111 | 0.5827 |
| 0.0247 | 137.5 | 3300 | 1.3866 | 0.6151 |
| 0.0196 | 141.67 | 3400 | 1.4513 | 0.5880 |
| 0.0218 | 145.83 | 3500 | 1.5100 | 0.5899 |
| 0.0196 | 150.0 | 3600 | 1.4936 | 0.5999 |
| 0.0164 | 154.17 | 3700 | 1.5012 | 0.5701 |
| 0.0168 | 158.33 | 3800 | 1.5601 | 0.5919 |
| 0.0151 | 162.5 | 3900 | 1.4891 | 0.5761 |
| 0.0137 | 166.67 | 4000 | 1.4839 | 0.5800 |
| 0.0143 | 170.83 | 4100 | 1.4826 | 0.5754 |
| 0.0114 | 175.0 | 4200 | 1.4950 | 0.5708 |
| 0.0092 | 179.17 | 4300 | 1.5008 | 0.5694 |
| 0.0104 | 183.33 | 4400 | 1.4774 | 0.5728 |
| 0.0096 | 187.5 | 4500 | 1.4948 | 0.5767 |
| 0.0105 | 191.67 | 4600 | 1.4557 | 0.5694 |
| 0.009 | 195.83 | 4700 | 1.4615 | 0.5628 |
| 0.0081 | 200.0 | 4800 | 1.4638 | 0.5602 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| shivam/wav2vec2-xls-r-hindi | shivam | 2022-03-23T18:33:12Z | 5 | 1 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "hi", "mozilla-foundation/common_voice_7_0", "robust-speech-event", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:05Z |
---
language:
- hi
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- hi
- mozilla-foundation/common_voice_7_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
metrics:
- wer
- cer
model-index:
- name: shivam/wav2vec2-xls-r-hindi
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: Common Voice Corpus 7.0
type: mozilla-foundation/common_voice_7_0
args: hi
metrics:
- name: Test WER
type: wer
value: 52.3
- name: Test CER
type: cer
value: 26.09
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-hindi
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - HI dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2282
- Wer: 0.6838
## Evaluation results on Common Voice 7 "test" (Running ./eval.py):
### With LM
- WER: 52.30
- CER: 26.09
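For reference, WER is the word-level edit distance between hypothesis and reference divided by the number of reference words (CER is the same computation over characters). A minimal, self-contained sketch — not the project's `eval.py`, which uses the standard metric libraries:

```python
def edit_distance(ref, hyp):
    """Word-level Levenshtein distance via a rolling DP row."""
    dp = list(range(len(hyp) + 1))
    for i in range(1, len(ref) + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, len(hyp) + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                              # deletion
                        dp[j - 1] + 1,                          # insertion
                        prev + (ref[i - 1] != hyp[j - 1]))      # substitution or match
            prev = cur
    return dp[-1]

def wer(reference, hypothesis):
    """Word error rate: edit distance over reference word count."""
    words = reference.split()
    return edit_distance(words, hypothesis.split()) / len(words)
```

A reported WER of 52.30 means roughly every second reference word needed an insertion, deletion, or substitution to match the hypothesis.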
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
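The reported total batch size follows from gradient accumulation: the gradients of 4 small batches are summed before each optimizer step, emulating a single batch of 32:

```python
# Values from the hyperparameter list above.
train_batch_size = 8
gradient_accumulation_steps = 4
total_train_batch_size = train_batch_size * gradient_accumulation_steps
```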
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.3155 | 3.4 | 500 | 4.5582 | 1.0 |
| 3.3369 | 6.8 | 1000 | 3.4269 | 1.0 |
| 2.1785 | 10.2 | 1500 | 1.7191 | 0.8831 |
| 1.579 | 13.6 | 2000 | 1.3604 | 0.7647 |
| 1.3773 | 17.01 | 2500 | 1.2737 | 0.7519 |
| 1.3165 | 20.41 | 3000 | 1.2457 | 0.7401 |
| 1.2274 | 23.81 | 3500 | 1.3617 | 0.7301 |
| 1.1787 | 27.21 | 4000 | 1.2068 | 0.7010 |
| 1.1467 | 30.61 | 4500 | 1.2416 | 0.6946 |
| 1.0801 | 34.01 | 5000 | 1.2312 | 0.6990 |
| 1.0709 | 37.41 | 5500 | 1.2984 | 0.7138 |
| 1.0307 | 40.81 | 6000 | 1.2049 | 0.6871 |
| 1.0003 | 44.22 | 6500 | 1.1956 | 0.6841 |
| 1.004 | 47.62 | 7000 | 1.2101 | 0.6793 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu113
- Datasets 1.18.1.dev0
- Tokenizers 0.11.0
|
samitizerxu/wav2vec2-xls-r-300m-fr
|
samitizerxu
| 2022-03-23T18:33:04Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"fr",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- fr
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
datasets:
- common_voice
model-index:
- name: wav2vec2-cls-r-300m-fr
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: fr
metrics:
- name: Test WER
type: wer
value: 56.62
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: fr
metrics:
- name: Test WER
type: wer
value: 58.22
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-cls-r-300m-fr
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the COMMON_VOICE - FR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6521
- Wer: 0.4330
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.6773 | 0.8 | 500 | 1.3907 | 0.9864 |
| 0.9526 | 1.6 | 1000 | 0.7760 | 0.6448 |
| 0.6418 | 2.4 | 1500 | 0.7605 | 0.6194 |
| 0.5028 | 3.2 | 2000 | 0.6516 | 0.5322 |
| 0.4133 | 4.0 | 2500 | 0.6303 | 0.5097 |
| 0.3285 | 4.8 | 3000 | 0.6422 | 0.5062 |
| 0.2764 | 5.6 | 3500 | 0.5936 | 0.4748 |
| 0.2361 | 6.4 | 4000 | 0.6486 | 0.4683 |
| 0.2049 | 7.2 | 4500 | 0.6321 | 0.4532 |
| 0.176 | 8.0 | 5000 | 0.6230 | 0.4482 |
| 0.1393 | 8.8 | 5500 | 0.6595 | 0.4403 |
| 0.1141 | 9.6 | 6000 | 0.6552 | 0.4348 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
infinitejoy/wav2vec2-large-xls-r-300m-basaa-cv8
|
infinitejoy
| 2022-03-23T18:32:58Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"bas",
"generated_from_trainer",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- bas
license: apache-2.0
tags:
- automatic-speech-recognition
- bas
- generated_from_trainer
- hf-asr-leaderboard
- model_for_talk
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: XLS-R-300M - Basaa
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: bas
metrics:
- name: Test WER
type: wer
value: 38.057
- name: Test CER
type: cer
value: 11.233
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-basaa-cv8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - BAS dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4648
- Wer: 0.5472
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.9421 | 12.82 | 500 | 2.8894 | 1.0 |
| 1.1872 | 25.64 | 1000 | 0.6688 | 0.7460 |
| 0.8894 | 38.46 | 1500 | 0.4868 | 0.6516 |
| 0.769 | 51.28 | 2000 | 0.4960 | 0.6507 |
| 0.6936 | 64.1 | 2500 | 0.4781 | 0.5384 |
| 0.624 | 76.92 | 3000 | 0.4643 | 0.5430 |
| 0.5966 | 89.74 | 3500 | 0.4530 | 0.5591 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
infinitejoy/wav2vec2-large-xls-r-300m-assamese-cv8
|
infinitejoy
| 2022-03-23T18:32:56Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"as",
"generated_from_trainer",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- as
license: apache-2.0
tags:
- as
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- model_for_talk
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: XLS-R-300M - Assamese
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: as
metrics:
- name: Test WER
type: wer
value: 65.966
- name: Test CER
type: cer
value: 22.188
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-assamese-cv8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - AS dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9814
- Wer: 0.7402
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 20.0 | 400 | 3.1447 | 1.0 |
| No log | 40.0 | 800 | 1.0074 | 0.8556 |
| 3.1278 | 60.0 | 1200 | 0.9507 | 0.7711 |
| 3.1278 | 80.0 | 1600 | 0.9730 | 0.7630 |
| 0.8247 | 100.0 | 2000 | 0.9814 | 0.7402 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
emre/wav2vec2-xls-r-300m-Tr-med-CommonVoice8
|
emre
| 2022-03-23T18:32:53Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"tr",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
language: tr
tags:
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-300m-Tr-med-CommonVoice8
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice tr
type: common_voice
args: tr
metrics:
- name: Test WER
type: wer
value: 49.14
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-Tr-med-CommonVoice8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2556
- Wer: 0.4914
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.4876 | 6.66 | 5000 | 0.3252 | 0.5784 |
| 0.6919 | 13.32 | 10000 | 0.2720 | 0.5172 |
| 0.5919 | 19.97 | 15000 | 0.2556 | 0.4914 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.10.3
|
comodoro/wav2vec2-xls-r-300m-cs
|
comodoro
| 2022-03-23T18:32:48Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"xlsr-fine-tuning-week",
"cs",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- cs
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
- xlsr-fine-tuning-week
datasets:
- common_voice
model-index:
- name: Czech comodoro Wav2Vec2 XLSR 300M CV6.1
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 6.1
type: common_voice
args: cs
metrics:
- name: Test WER
type: wer
value: 22.2
- name: Test CER
type: cer
value: 5.1
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: cs
metrics:
- name: Test WER
type: wer
value: 66.78
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: cs
metrics:
- name: Test WER
type: wer
value: 57.52
---
# Wav2Vec2-Large-XLSR-53-Czech
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Czech using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "cs", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("comodoro/wav2vec2-xls-r-300m-cs")
model = Wav2Vec2ForCTC.from_pretrained("comodoro/wav2vec2-xls-r-300m-cs")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated as follows on the Czech test data of Common Voice 6.1:
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "cs", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("comodoro/wav2vec2-xls-r-300m-cs")
model = Wav2Vec2ForCTC.from_pretrained("comodoro/wav2vec2-xls-r-300m-cs")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\/\"\“\„\%\”\�\–\'\`\«\»\—\’\…]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference over the test set and decode the predictions
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 22.20 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
# TODO The script used for training can be found [here](...)
|
AlexN/xls-r-300m-fr
|
AlexN
| 2022-03-23T18:32:43Z | 56 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"fr",
"dataset:mozilla-foundation/common_voice_8_0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language:
- fr
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: xls-r-300m-fr
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8.0 fr
type: mozilla-foundation/common_voice_8_0
args: fr
metrics:
- name: Test WER
type: wer
value: 21.58
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: fr
metrics:
- name: Test WER
type: wer
value: 36.03
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: fr
metrics:
- name: Test WER
type: wer
value: 38.86
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r-300m-fr
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - FR dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2700
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
sammy786/wav2vec2-xlsr-mongolian
|
sammy786
| 2022-03-23T18:30:27Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"mn",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- mn
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mn
- model_for_talk
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: sammy786/wav2vec2-xlsr-mongolian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: mn
metrics:
- name: Test WER
type: wer
value: 32.63
- name: Test CER
type: cer
value: 9.26
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: mn
metrics:
- name: Test WER
type: wer
value: 91.26
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: mn
metrics:
- name: Test WER
type: wer
value: 91.37
---
# sammy786/wav2vec2-xlsr-mongolian
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - mn dataset.
It achieves the following results on the evaluation set (a 10 percent split of the training data merged with the `other` and `dev` datasets):
- Loss: 31.52
- Wer: 34.1522
## Model description
The [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) checkpoint was fine-tuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data:
Common Voice Mongolian `train.tsv`, `dev.tsv`, and `other.tsv`
## Training procedure
To create the training set, all available splits were appended and a 90-10 train-evaluation split was applied.
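A minimal sketch of such a split (illustrative only — the helper name and stand-in data are hypothetical; `eval_fraction=0.1` and `seed=13` mirror the 90-10 split and seed from the hyperparameters below):

```python
import random

def train_eval_split(examples, eval_fraction=0.1, seed=13):
    """Shuffle the concatenated examples and carve off an evaluation slice."""
    rng = random.Random(seed)
    shuffled = list(examples)       # keep the input untouched
    rng.shuffle(shuffled)
    n_eval = int(len(shuffled) * eval_fraction)
    return shuffled[n_eval:], shuffled[:n_eval]  # (train, eval)

# All available splits are appended before splitting.
combined = [f"clip_{i:03d}" for i in range(100)]  # stand-in for train + dev + other rows
train_set, eval_set = train_eval_split(combined)
```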
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000045637994662983496
- train_batch_size: 16
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Step | Training Loss | Validation Loss | Wer |
|:----:|:-------------:|:---------------:|:--------:|
| 200 | 4.906200 | 3.012986 | 1.000000 |
| 400 | 1.734600 | 0.704821 | 0.750497 |
| 600 | 1.132100 | 0.496223 | 0.531241 |
| 800 | 0.929300 | 0.468937 | 0.469043 |
| 1000 | 0.772300 | 0.425313 | 0.448168 |
| 1200 | 0.623900 | 0.394633 | 0.414229 |
| 1400 | 0.512400 | 0.369225 | 0.397614 |
| 1600 | 0.439900 | 0.346033 | 0.391650 |
| 1800 | 0.391300 | 0.358454 | 0.379296 |
| 2000 | 0.377000 | 0.346822 | 0.359415 |
| 2200 | 0.347500 | 0.325205 | 0.348481 |
| 2400 | 0.343600 | 0.315233 | 0.344078 |
| 2600 | 0.328000 | 0.308826 | 0.341522 |
| 2800 | 0.358200 | 0.331786 | 0.343084 |
| 3000 | 0.417200 | 0.370051 | 0.356433 |
| 3200 | 0.685300 | 0.595438 | 0.407413 |
| 3400 | 0.764100 | 0.643449 | 0.359983 |
| 3600 | 0.717100 | 0.505033 | 0.371911 |
| 3800 | 0.620900 | 0.464138 | 0.369071 |
| 4000 | 0.590700 | 0.445417 | 0.363249 |
| 4200 | 0.561000 | 0.440727 | 0.360267 |
| 4400 | 0.550600 | 0.447122 | 0.360267 |
| 4600 | 0.562100 | 0.457020 | 0.359841 |
| 4800 | 0.578800 | 0.470477 | 0.360551 |
| 5000 | 0.580400 | 0.481413 | 0.362539 |
| 5200 | 0.605500 | 0.485240 | 0.362823 |
| 5400 | 0.582900 | 0.486654 | 0.362965 |
| 5600 | 0.593900 | 0.486715 | 0.363107 |
| 5800 | 0.590900 | 0.486716 | 0.363107 |
| 6000 | 0.587200 | 0.486716 | 0.363107 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id sammy786/wav2vec2-xlsr-mongolian --dataset mozilla-foundation/common_voice_8_0 --config mn --split test
```
|
infinitejoy/wav2vec2-large-xls-r-300m-bashkir
|
infinitejoy
| 2022-03-23T18:30:18Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"ba",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- ba
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_7_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: XLS-R-300M - Bashkir
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: ba
metrics:
- name: Test WER
type: wer
value: 24.2
- name: Test CER
type: cer
value: 5.08
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-bashkir
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - BA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1892
- Wer: 0.2421
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.4792 | 0.5 | 2000 | 0.4598 | 0.5404 |
| 1.449 | 1.0 | 4000 | 0.4650 | 0.5610 |
| 1.3742 | 1.49 | 6000 | 0.4001 | 0.4977 |
| 1.3375 | 1.99 | 8000 | 0.3916 | 0.4894 |
| 1.2961 | 2.49 | 10000 | 0.3641 | 0.4569 |
| 1.2714 | 2.99 | 12000 | 0.3491 | 0.4488 |
| 1.2399 | 3.48 | 14000 | 0.3151 | 0.3986 |
| 1.2067 | 3.98 | 16000 | 0.3081 | 0.3923 |
| 1.1842 | 4.48 | 18000 | 0.2875 | 0.3703 |
| 1.1644 | 4.98 | 20000 | 0.2840 | 0.3670 |
| 1.161 | 5.48 | 22000 | 0.2790 | 0.3597 |
| 1.1303 | 5.97 | 24000 | 0.2552 | 0.3272 |
| 1.0874 | 6.47 | 26000 | 0.2405 | 0.3142 |
| 1.0613 | 6.97 | 28000 | 0.2352 | 0.3055 |
| 1.0498 | 7.47 | 30000 | 0.2249 | 0.2910 |
| 1.021 | 7.96 | 32000 | 0.2118 | 0.2752 |
| 1.0002 | 8.46 | 34000 | 0.2046 | 0.2662 |
| 0.9762 | 8.96 | 36000 | 0.1969 | 0.2530 |
| 0.9568 | 9.46 | 38000 | 0.1917 | 0.2449 |
| 0.953 | 9.96 | 40000 | 0.1893 | 0.2425 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
vitouphy/wav2vec2-xls-r-300m-japanese
|
vitouphy
| 2022-03-23T18:30:07Z | 20 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"ja",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_8_0",
"doi:10.57967/hf/0124",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- ja
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- ja
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: XLS-R-300M - Japanese
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: ja
metrics:
- name: Test WER
type: wer
value: 54.05
- name: Test CER
type: cer
value: 27.54
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: ja
metrics:
- name: Validation WER
type: wer
value: 48.77
- name: Validation CER
type: cer
value: 24.87
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: ja
metrics:
- name: Test CER
type: cer
value: 27.36
---
# XLS-R-300M - Japanese
This model transcribes audio into hiragana, one of the Japanese writing systems.
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the `mozilla-foundation/common_voice_8_0` dataset. Note that the results below were obtained by:
- Modifying `eval.py` to suit this use case.
- Converting all texts to hiragana using [pykakasi](https://pykakasi.readthedocs.io) and tokenizing them using [fugashi](https://github.com/polm/fugashi), since kanji and katakana share the same sounds as hiragana.
It achieves the following results on the evaluation set:
- Loss: 0.7751
- Cer: 0.2227
# Evaluation results (Running ./eval.py):
| Model | Metric | Common-Voice-8/test | speech-recognition-community-v2/dev-data |
|:--------:|:------:|:-------------------:|:------------------------------------------:|
| w/o LM | WER | 0.5964 | 0.5532 |
| | CER | 0.2944 | 0.2629 |
| w/ LM | WER | 0.5405 | 0.4877 |
| | CER | **0.2754** | **0.2487** |
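The katakana half of the normalization mentioned above is a fixed Unicode offset and can be sketched without any dependency (kanji, by contrast, needs a dictionary-based reading, which is what pykakasi provides — this snippet is an illustration, not the card's actual preprocessing):

```python
def kata_to_hira(text: str) -> str:
    """Map katakana (U+30A1..U+30F6) onto hiragana by the fixed
    code-point offset 0x60; leave everything else untouched."""
    return "".join(
        chr(ord(ch) - 0x60) if "\u30a1" <= ch <= "\u30f6" else ch
        for ch in text
    )
```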
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.4081 | 1.6 | 500 | 4.0983 | 1.0 |
| 3.303 | 3.19 | 1000 | 3.3563 | 1.0 |
| 3.1538 | 4.79 | 1500 | 3.2066 | 0.9239 |
| 2.1526 | 6.39 | 2000 | 1.1597 | 0.3355 |
| 1.8726 | 7.98 | 2500 | 0.9023 | 0.2505 |
| 1.7817 | 9.58 | 3000 | 0.8219 | 0.2334 |
| 1.7488 | 11.18 | 3500 | 0.7915 | 0.2222 |
| 1.7039 | 12.78 | 4000 | 0.7751 | 0.2227 |
| Stop & Train | | | | |
| 1.6571 | 15.97 | 5000 | 0.6788 | 0.1685 |
| 1.520400 | 19.16 | 6000 | 0.6095 | 0.1409 |
| 1.448200 | 22.35 | 7000 | 0.5843 | 0.1430 |
| 1.385400 | 25.54 | 8000 | 0.5699 | 0.1263 |
| 1.354200 | 28.73 | 9000 | 0.5686 | 0.1219 |
| 1.331500 | 31.92 | 10000 | 0.5502 | 0.1144 |
| 1.290800 | 35.11 | 11000 | 0.5371 | 0.1140 |
| Stop & Train | | | | |
| 1.235200 | 38.30 | 12000 | 0.5394 | 0.1106 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
jsnfly/wav2vec2-large-xlsr-53-german-gpt2
|
jsnfly
| 2022-03-23T18:29:57Z | 21 | 2 |
transformers
|
[
"transformers",
"pytorch",
"speech-encoder-decoder",
"automatic-speech-recognition",
"de",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- de
license: apache-2.0
tags:
- automatic-speech-recognition
- de
- hf-asr-leaderboard
- mozilla-foundation/common_voice_7_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: Wav2Vec2-Large-XLSR-53-German-GPT2
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: de
metrics:
- name: Test WER
type: wer
value: 10.02
- name: Test CER
type: cer
value: 4.7
---
# Wav2Vec2-Large-XLSR-53-German-GPT2
This is an encoder-decoder model for automatic speech recognition trained on the
MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - DE dataset. The encoder was initialized from
[jonatasgrosman/wav2vec2-large-xlsr-53-german](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-german) and
the decoder from [dbmdz/german-gpt2](https://huggingface.co/dbmdz/german-gpt2).
It was trained using a two-step process:
* fine-tuning only the cross-attention weights and the decoder using the pre-computed outputs of the Wav2Vec2 model
* relatively fast training
* also works on small GPU (eg. 8 GB)
* but may take a lot of disk space
* should already yield decent results
* fine-tuning the model end-to-end
* much slower
* needs a bigger GPU
There is also one trick, which seemed to improve performance significantly: adding position embeddings to the
encoder outputs and initializing them with the pre-trained position embeddings of the GPT2 model (See `eval.py`).
The training notebooks are still early drafts. The results can probably also be improved considerably, for example by using a learning
rate schedule.
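The position-embedding trick described above can be sketched roughly as follows. This is a minimal illustration in plain PyTorch, not the actual code from `eval.py`: the tensor sizes and the embedding table are stand-ins (in the real setup the table would be initialized from the pre-trained `wpe` weights of `dbmdz/german-gpt2`).

```python
import torch
import torch.nn as nn

# Assumed sizes: GPT2's hidden size is 768, its context window is 1024.
hidden_size, max_positions = 768, 1024

# Stand-in for the GPT2 position-embedding table (the real one would be
# copied from the pre-trained GPT2 weights rather than randomly initialized).
position_embeddings = nn.Embedding(max_positions, hidden_size)

# Stand-in for Wav2Vec2 encoder outputs: (batch, time, hidden).
encoder_outputs = torch.randn(2, 200, hidden_size)

# Add a position embedding to each encoder time step, so the decoder's
# cross-attention sees position-aware encoder states.
positions = torch.arange(encoder_outputs.size(1))
encoder_outputs = encoder_outputs + position_embeddings(positions)

print(encoder_outputs.shape)
```

The addition broadcasts the `(time, hidden)` embeddings over the batch dimension, leaving the output shape unchanged.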
|
anuragshas/wav2vec2-xls-r-1b-hi
|
anuragshas
| 2022-03-23T18:29:52Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"hi",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- hi
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_7_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
metrics:
- wer
model-index:
- name: wav2vec2-xls-r-1b-hi-cv7
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
type: mozilla-foundation/common_voice_7_0
name: Common Voice 7
args: hi
metrics:
- type: wer
value: 18.504
name: Test WER
- name: Test CER
type: cer
value: 6.655
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-1b-hi-cv7
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - HI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5878
- Wer: 0.3419
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
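The effective batch size and the linear warm-up schedule listed above can be reproduced in plain PyTorch roughly like this (a sketch with a dummy parameter; the step counts match the hyperparameters, everything else is illustrative):

```python
import torch

# Effective batch size = per-device batch size x gradient accumulation steps.
effective_batch = 8 * 4
assert effective_batch == 32

# Linear schedule with warm-up: ramp the learning rate from 0 to 7.5e-5 over
# 2000 steps, then decay linearly to 0 by the final step (~14400 in the table).
peak_lr, warmup_steps, total_steps = 7.5e-5, 2000, 14400

param = torch.nn.Parameter(torch.zeros(1))
optimizer = torch.optim.Adam([param], lr=peak_lr, betas=(0.9, 0.999), eps=1e-8)

def lr_lambda(step):
    if step < warmup_steps:
        return step / warmup_steps
    return max(0.0, (total_steps - step) / (total_steps - warmup_steps))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)

for _ in range(warmup_steps):
    optimizer.step()
    scheduler.step()

# After the warm-up phase the learning rate sits at its peak value.
print(optimizer.param_groups[0]["lr"])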
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.9859 | 2.72 | 400 | 1.1663 | 0.7948 |
| 1.2969 | 5.44 | 800 | 0.7725 | 0.6562 |
| 1.1954 | 8.16 | 1200 | 0.5940 | 0.4904 |
| 1.164 | 10.88 | 1600 | 0.5338 | 0.4316 |
| 1.1464 | 13.6 | 2000 | 0.5432 | 0.4226 |
| 1.1553 | 16.33 | 2400 | 0.5471 | 0.4260 |
| 1.0985 | 19.05 | 2800 | 0.5290 | 0.4076 |
| 1.0421 | 21.77 | 3200 | 0.5672 | 0.4181 |
| 0.9831 | 24.49 | 3600 | 0.5741 | 0.4141 |
| 0.9827 | 27.21 | 4000 | 0.5754 | 0.4179 |
| 0.9669 | 29.93 | 4400 | 0.5310 | 0.3889 |
| 0.9496 | 32.65 | 4800 | 0.5649 | 0.4062 |
| 0.9112 | 35.37 | 5200 | 0.5738 | 0.3926 |
| 0.8838 | 38.1 | 5600 | 0.5232 | 0.3768 |
| 0.8666 | 40.81 | 6000 | 0.5510 | 0.3852 |
| 0.8366 | 43.54 | 6400 | 0.5436 | 0.3837 |
| 0.7957 | 46.26 | 6800 | 0.5337 | 0.3775 |
| 0.7834 | 48.98 | 7200 | 0.5611 | 0.3844 |
| 0.7685 | 51.7 | 7600 | 0.5710 | 0.4008 |
| 0.7431 | 54.42 | 8000 | 0.5636 | 0.3726 |
| 0.7353 | 57.14 | 8400 | 0.5937 | 0.3836 |
| 0.7001 | 59.86 | 8800 | 0.5815 | 0.3858 |
| 0.6799 | 62.58 | 9200 | 0.5862 | 0.3696 |
| 0.6459 | 65.31 | 9600 | 0.6181 | 0.3762 |
| 0.6121 | 68.03 | 10000 | 0.5637 | 0.3590 |
| 0.5942 | 70.75 | 10400 | 0.6374 | 0.3882 |
| 0.5769 | 73.47 | 10800 | 0.6015 | 0.3640 |
| 0.5689 | 76.19 | 11200 | 0.5669 | 0.3508 |
| 0.5461 | 78.91 | 11600 | 0.5967 | 0.3621 |
| 0.5286 | 81.63 | 12000 | 0.5840 | 0.3605 |
| 0.5057 | 84.35 | 12400 | 0.5848 | 0.3489 |
| 0.482 | 87.07 | 12800 | 0.5860 | 0.3488 |
| 0.4655 | 89.79 | 13200 | 0.5780 | 0.3453 |
| 0.4523 | 92.52 | 13600 | 0.6150 | 0.3532 |
| 0.4422 | 95.24 | 14000 | 0.5930 | 0.3452 |
| 0.4436 | 97.96 | 14400 | 0.5867 | 0.3428 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_7_0` with split `test`
```bash
python eval.py --model_id anuragshas/wav2vec2-xls-r-1b-hi --dataset mozilla-foundation/common_voice_7_0 --config hi --split test
```
### Inference With LM
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "anuragshas/wav2vec2-xls-r-1b-hi"
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_7_0", "hi", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
    logits = model(input_values).logits
transcription = processor.batch_decode(logits.numpy()).text
# => "तुम्हारे पास तीन महीने बचे हैं"
```
### Eval results on Common Voice 7 "test" (WER):
| Without LM | With LM (run `./eval.py`) |
|---|---|
| 28.942 | 18.504 |
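For reference, the WER figures above are standard word-level edit distances divided by the reference length. A minimal, dependency-free illustration (the example sentences are made up):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[-1][-1] / len(ref)

print(wer("a b c d", "a x c"))  # 0.5 (one substitution + one deletion)
```

Libraries such as `jiwer` compute the same quantity; this is only meant to make the reported numbers concrete.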
|